Jan 29 08:40:56 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 29 08:40:57 crc restorecon[4684]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 08:40:57 crc restorecon[4684]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc 
restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc 
restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 
08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc 
restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:40:57 crc 
restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 
crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 
08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:40:57 crc 
restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc 
restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 
crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc 
restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:57 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc 
restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0
Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 08:40:58 crc restorecon[4684]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 08:40:58 crc restorecon[4684]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 29 08:40:58 crc kubenswrapper[4895]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 08:40:58 crc kubenswrapper[4895]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 29 08:40:58 crc kubenswrapper[4895]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 08:40:58 crc kubenswrapper[4895]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 08:40:58 crc kubenswrapper[4895]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 08:40:58 crc kubenswrapper[4895]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.876082 4895 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894048 4895 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894096 4895 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894104 4895 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894111 4895 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894116 4895 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894121 4895 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894126 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894131 4895 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894135 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894140 4895 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894145 4895 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894150 4895 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894154 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894160 4895 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894164 4895 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894169 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894176 4895 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894181 4895 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894188 4895 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894195 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894202 4895 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894208 4895 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894215 4895 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894221 4895 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894227 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894235 4895 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894241 4895 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894247 4895 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894251 4895 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894256 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894263 4895 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894268 4895 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894273 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894279 4895 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894284 4895 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894289 4895 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894294 4895 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894299 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894304 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894309 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894315 4895 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894320 4895 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894325 4895 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894330 4895 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894336 4895 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894340 4895 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894346 4895 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894350 4895 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894355 4895 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894360 4895 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894365 4895 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894370 4895 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894375 4895 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894381 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894385 4895 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894390 4895 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894394 4895 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894399 4895 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894403 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894407 4895 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894412 4895
feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894417 4895 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894422 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894428 4895 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894436 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894442 4895 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894447 4895 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894452 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894457 4895 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894464 4895 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.894469 4895 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894595 4895 flags.go:64] FLAG: --address="0.0.0.0" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894608 4895 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894618 4895 flags.go:64] FLAG: --anonymous-auth="true" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894626 4895 flags.go:64] 
FLAG: --application-metrics-count-limit="100" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894634 4895 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894643 4895 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894651 4895 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894660 4895 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894666 4895 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894672 4895 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894678 4895 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894684 4895 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894690 4895 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894695 4895 flags.go:64] FLAG: --cgroup-root="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894701 4895 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894707 4895 flags.go:64] FLAG: --client-ca-file="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894712 4895 flags.go:64] FLAG: --cloud-config="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894717 4895 flags.go:64] FLAG: --cloud-provider="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894723 4895 flags.go:64] FLAG: --cluster-dns="[]" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894729 4895 flags.go:64] FLAG: --cluster-domain="" Jan 29 08:40:58 crc 
kubenswrapper[4895]: I0129 08:40:58.894735 4895 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894740 4895 flags.go:64] FLAG: --config-dir="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894745 4895 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894751 4895 flags.go:64] FLAG: --container-log-max-files="5" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894758 4895 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894764 4895 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894769 4895 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894775 4895 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894780 4895 flags.go:64] FLAG: --contention-profiling="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894786 4895 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894791 4895 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894796 4895 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894801 4895 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894808 4895 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894814 4895 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894819 4895 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894824 
4895 flags.go:64] FLAG: --enable-load-reader="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894830 4895 flags.go:64] FLAG: --enable-server="true" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894836 4895 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894843 4895 flags.go:64] FLAG: --event-burst="100" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894849 4895 flags.go:64] FLAG: --event-qps="50" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894855 4895 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894860 4895 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894865 4895 flags.go:64] FLAG: --eviction-hard="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894872 4895 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894878 4895 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894884 4895 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894891 4895 flags.go:64] FLAG: --eviction-soft="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894896 4895 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894902 4895 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894907 4895 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894929 4895 flags.go:64] FLAG: --experimental-mounter-path="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894935 4895 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 
08:40:58.894940 4895 flags.go:64] FLAG: --fail-swap-on="true" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894945 4895 flags.go:64] FLAG: --feature-gates="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894952 4895 flags.go:64] FLAG: --file-check-frequency="20s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894957 4895 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894963 4895 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894969 4895 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894974 4895 flags.go:64] FLAG: --healthz-port="10248" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894980 4895 flags.go:64] FLAG: --help="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894985 4895 flags.go:64] FLAG: --hostname-override="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894990 4895 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.894996 4895 flags.go:64] FLAG: --http-check-frequency="20s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895001 4895 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895006 4895 flags.go:64] FLAG: --image-credential-provider-config="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895011 4895 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895016 4895 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895021 4895 flags.go:64] FLAG: --image-service-endpoint="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895027 4895 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895032 
4895 flags.go:64] FLAG: --kube-api-burst="100" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895038 4895 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895043 4895 flags.go:64] FLAG: --kube-api-qps="50" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895048 4895 flags.go:64] FLAG: --kube-reserved="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895053 4895 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895060 4895 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895065 4895 flags.go:64] FLAG: --kubelet-cgroups="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895070 4895 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895076 4895 flags.go:64] FLAG: --lock-file="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895081 4895 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895087 4895 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895092 4895 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895102 4895 flags.go:64] FLAG: --log-json-split-stream="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895108 4895 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895113 4895 flags.go:64] FLAG: --log-text-split-stream="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895118 4895 flags.go:64] FLAG: --logging-format="text" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895123 4895 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 29 08:40:58 crc kubenswrapper[4895]: 
I0129 08:40:58.895129 4895 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895134 4895 flags.go:64] FLAG: --manifest-url="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895139 4895 flags.go:64] FLAG: --manifest-url-header="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895147 4895 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895153 4895 flags.go:64] FLAG: --max-open-files="1000000" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895160 4895 flags.go:64] FLAG: --max-pods="110" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895166 4895 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895171 4895 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895176 4895 flags.go:64] FLAG: --memory-manager-policy="None" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895181 4895 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895187 4895 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895192 4895 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895197 4895 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895211 4895 flags.go:64] FLAG: --node-status-max-images="50" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895217 4895 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895222 4895 flags.go:64] FLAG: --oom-score-adj="-999" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895228 4895 
flags.go:64] FLAG: --pod-cidr="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895233 4895 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895241 4895 flags.go:64] FLAG: --pod-manifest-path="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895246 4895 flags.go:64] FLAG: --pod-max-pids="-1" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895253 4895 flags.go:64] FLAG: --pods-per-core="0" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895258 4895 flags.go:64] FLAG: --port="10250" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895263 4895 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895268 4895 flags.go:64] FLAG: --provider-id="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895274 4895 flags.go:64] FLAG: --qos-reserved="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895279 4895 flags.go:64] FLAG: --read-only-port="10255" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895284 4895 flags.go:64] FLAG: --register-node="true" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895289 4895 flags.go:64] FLAG: --register-schedulable="true" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895295 4895 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895305 4895 flags.go:64] FLAG: --registry-burst="10" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895310 4895 flags.go:64] FLAG: --registry-qps="5" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895315 4895 flags.go:64] FLAG: --reserved-cpus="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895321 4895 flags.go:64] FLAG: --reserved-memory="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 
08:40:58.895329 4895 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895334 4895 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895340 4895 flags.go:64] FLAG: --rotate-certificates="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895345 4895 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895350 4895 flags.go:64] FLAG: --runonce="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895355 4895 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895361 4895 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895367 4895 flags.go:64] FLAG: --seccomp-default="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895372 4895 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895378 4895 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895384 4895 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895389 4895 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895394 4895 flags.go:64] FLAG: --storage-driver-password="root" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895401 4895 flags.go:64] FLAG: --storage-driver-secure="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895407 4895 flags.go:64] FLAG: --storage-driver-table="stats" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895412 4895 flags.go:64] FLAG: --storage-driver-user="root" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895417 4895 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" 
Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895422 4895 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895427 4895 flags.go:64] FLAG: --system-cgroups="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895432 4895 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895441 4895 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895446 4895 flags.go:64] FLAG: --tls-cert-file="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895450 4895 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895458 4895 flags.go:64] FLAG: --tls-min-version="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895464 4895 flags.go:64] FLAG: --tls-private-key-file="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895468 4895 flags.go:64] FLAG: --topology-manager-policy="none" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895473 4895 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895478 4895 flags.go:64] FLAG: --topology-manager-scope="container" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895484 4895 flags.go:64] FLAG: --v="2" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895492 4895 flags.go:64] FLAG: --version="false" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895499 4895 flags.go:64] FLAG: --vmodule="" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895506 4895 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.895511 4895 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895667 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall 
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895676 4895 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895682 4895 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895687 4895 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895692 4895 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895697 4895 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895702 4895 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895707 4895 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895713 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895718 4895 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895723 4895 feature_gate.go:330] unrecognized feature gate: Example Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895731 4895 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895737 4895 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895742 4895 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895747 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895753 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895758 4895 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895763 4895 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895768 4895 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895773 4895 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895777 4895 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895782 4895 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895787 4895 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895792 4895 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895797 4895 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895802 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895807 4895 feature_gate.go:330] 
unrecognized feature gate: BareMetalLoadBalancer Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895812 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895816 4895 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895821 4895 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895825 4895 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895830 4895 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895835 4895 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895839 4895 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895844 4895 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895848 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895853 4895 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895857 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895862 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895867 4895 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895872 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 08:40:58 crc 
kubenswrapper[4895]: W0129 08:40:58.895876 4895 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895880 4895 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895885 4895 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895890 4895 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895894 4895 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895899 4895 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895903 4895 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895908 4895 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895934 4895 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895940 4895 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895947 4895 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895951 4895 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895957 4895 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895969 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895978 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895983 4895 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895988 4895 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895994 4895 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.895998 4895 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.896003 4895 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.896007 4895 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.896011 4895 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.896016 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.896020 4895 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.896025 4895 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.896029 4895 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.896033 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 08:40:58 crc 
kubenswrapper[4895]: W0129 08:40:58.896037 4895 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.896042 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.896046 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.900279 4895 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.910834 4895 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.910887 4895 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.910985 4895 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911008 4895 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911013 4895 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911017 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911020 4895 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911024 4895 feature_gate.go:330] unrecognized feature gate: 
OVNObservability Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911032 4895 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911036 4895 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911041 4895 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911044 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911048 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911051 4895 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911055 4895 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911059 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911063 4895 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911066 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911085 4895 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911089 4895 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911094 4895 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911102 4895 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911106 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911110 4895 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911114 4895 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911118 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911122 4895 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911125 4895 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911129 4895 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911133 4895 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911137 4895 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911140 4895 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911144 4895 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911163 4895 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911167 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911172 4895 feature_gate.go:353] 
Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911179 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911183 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911187 4895 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911190 4895 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911194 4895 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911199 4895 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911203 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911207 4895 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911210 4895 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911214 4895 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911218 4895 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911222 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911240 4895 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 
08:40:58.911244 4895 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911247 4895 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911251 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911254 4895 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911258 4895 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911262 4895 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911265 4895 feature_gate.go:330] unrecognized feature gate: Example Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911269 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911272 4895 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911276 4895 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911279 4895 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911283 4895 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911286 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911290 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911293 4895 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 08:40:58 crc 
kubenswrapper[4895]: W0129 08:40:58.911297 4895 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911300 4895 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911318 4895 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911323 4895 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911329 4895 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911334 4895 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911339 4895 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911343 4895 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911348 4895 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.911356 4895 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911531 4895 feature_gate.go:330] unrecognized feature gate: 
BareMetalLoadBalancer Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911552 4895 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911557 4895 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911561 4895 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911565 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911568 4895 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911572 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911575 4895 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911580 4895 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911585 4895 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911590 4895 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911594 4895 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911599 4895 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911605 4895 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911608 4895 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911612 4895 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911632 4895 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911636 4895 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911641 4895 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911645 4895 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911649 4895 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911653 4895 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911657 4895 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911662 4895 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911666 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911669 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911673 4895 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911677 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911681 4895 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911685 4895 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911689 4895 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911709 4895 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911714 4895 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911718 4895 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911722 4895 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911726 4895 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911730 
4895 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911733 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911737 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911740 4895 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911744 4895 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911748 4895 feature_gate.go:330] unrecognized feature gate: Example Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911751 4895 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911755 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911759 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911762 4895 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911766 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911770 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911788 4895 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911792 4895 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911796 4895 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 
08:40:58.911799 4895 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911803 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911807 4895 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911810 4895 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911814 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911819 4895 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911823 4895 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911827 4895 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911831 4895 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911835 4895 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911839 4895 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911842 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911846 4895 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911863 4895 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 
08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911867 4895 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911871 4895 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911875 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911879 4895 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911883 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 08:40:58 crc kubenswrapper[4895]: W0129 08:40:58.911888 4895 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.911895 4895 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.914220 4895 server.go:940] "Client rotation is on, will bootstrap in background" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.921019 4895 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.921146 4895 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.923152 4895 server.go:997] "Starting client certificate rotation" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.923171 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.923400 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-25 23:22:57.478673958 +0000 UTC Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.923524 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.984588 4895 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 08:40:58 crc kubenswrapper[4895]: I0129 08:40:58.987342 4895 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 08:40:58 crc kubenswrapper[4895]: E0129 08:40:58.993148 4895 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.142:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.014207 4895 log.go:25] "Validated CRI v1 runtime API" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.085523 4895 log.go:25] "Validated CRI v1 image API" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.089394 4895 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.098863 4895 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-08-37-21-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.098954 4895 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.122215 4895 manager.go:217] Machine: {Timestamp:2026-01-29 08:40:59.11882088 +0000 UTC m=+0.760329056 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:1999941b-7422-4452-a2a1-4823b90b5d59 BootID:5dc976ab-cc38-4fc1-8149-00132186b0b4 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 
Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:3c:94:e4 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:3c:94:e4 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d7:27:9f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:8b:14:a0 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:84:36:d0 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e4:67:ee Speed:-1 Mtu:1496} {Name:eth10 MacAddress:66:4d:e2:86:a7:ce Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:9e:aa:95:4d:54:76 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.122584 4895 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.122783 4895 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.123225 4895 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.123470 4895 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.123524 4895 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.123870 4895 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.123889 4895 container_manager_linux.go:303] "Creating device plugin manager" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.124596 4895 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.124644 4895 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.125485 4895 state_mem.go:36] "Initialized new in-memory state store" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.125619 4895 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.130082 4895 kubelet.go:418] "Attempting to sync node with API server" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.130139 4895 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.130177 4895 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.130202 4895 kubelet.go:324] "Adding apiserver pod source" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.130232 4895 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.134655 4895 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.135590 4895 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 29 08:40:59 crc kubenswrapper[4895]: W0129 08:40:59.139277 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.142:6443: connect: connection refused Jan 29 08:40:59 crc kubenswrapper[4895]: E0129 08:40:59.139372 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.142:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:40:59 crc kubenswrapper[4895]: W0129 08:40:59.139351 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.142:6443: connect: connection refused Jan 29 08:40:59 crc kubenswrapper[4895]: E0129 08:40:59.139496 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.142:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.139726 4895 kubelet.go:854] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.141381 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.141422 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.141435 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.141446 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.141464 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.141476 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.141489 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.141507 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.141523 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.141538 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.141556 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.141568 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.142894 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.143542 4895 server.go:1280] "Started kubelet" Jan 29 
08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.144867 4895 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.144868 4895 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 08:40:59 crc systemd[1]: Started Kubernetes Kubelet. Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.145616 4895 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.145715 4895 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.142:6443: connect: connection refused Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.146664 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.146722 4895 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.146755 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:22:36.136537733 +0000 UTC Jan 29 08:40:59 crc kubenswrapper[4895]: E0129 08:40:59.146952 4895 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.153178 4895 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.153209 4895 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.153162 4895 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 08:40:59 crc kubenswrapper[4895]: 
I0129 08:40:59.161077 4895 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.161126 4895 factory.go:55] Registering systemd factory Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.161137 4895 factory.go:221] Registration of the systemd container factory successfully Jan 29 08:40:59 crc kubenswrapper[4895]: E0129 08:40:59.161499 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" interval="200ms" Jan 29 08:40:59 crc kubenswrapper[4895]: W0129 08:40:59.161512 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.142:6443: connect: connection refused Jan 29 08:40:59 crc kubenswrapper[4895]: E0129 08:40:59.161609 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.142:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.161791 4895 factory.go:153] Registering CRI-O factory Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.161889 4895 factory.go:221] Registration of the crio container factory successfully Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.161994 4895 factory.go:103] Registering Raw factory Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.162061 4895 
manager.go:1196] Started watching for new ooms in manager Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.162729 4895 manager.go:319] Starting recovery of all containers Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.162888 4895 server.go:460] "Adding debug handlers to kubelet server" Jan 29 08:40:59 crc kubenswrapper[4895]: E0129 08:40:59.161623 4895 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.142:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f26ffe8c30403 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 08:40:59.143504899 +0000 UTC m=+0.785013065,LastTimestamp:2026-01-29 08:40:59.143504899 +0000 UTC m=+0.785013065,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169429 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169500 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169516 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169528 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169540 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169552 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169562 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169572 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169584 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169594 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169603 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169614 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169623 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169636 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169645 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" 
seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169655 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169664 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169675 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169685 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169696 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169705 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: 
I0129 08:40:59.169717 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169727 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169737 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169748 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169758 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169772 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169783 4895 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.169795 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.171696 4895 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.172047 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.172079 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.172096 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.172110 
4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.172123 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.172148 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.172160 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.172184 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.172199 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.172214 4895 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.172227 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.172238 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.172250 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.172261 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.173166 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.173222 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.173253 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.173269 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.173297 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.173322 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.173352 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.173365 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.173389 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.173408 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.173426 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.173447 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.173463 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.173479 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174275 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174309 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174321 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174335 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174347 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174359 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174386 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174398 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174437 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174463 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174476 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174504 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" 
seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174531 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174542 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174573 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174585 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174611 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174623 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174634 4895 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174647 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174661 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174691 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174703 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174716 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.174727 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.175465 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.175563 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.175639 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.175704 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.175772 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.175846 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.175931 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.176000 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.176122 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.176213 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.176290 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.176361 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.176424 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.176525 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.176611 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.176679 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.176772 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.176844 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.177031 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.177112 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.177192 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.177283 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.177393 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.177519 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.177651 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.177746 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.177828 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.178040 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.178156 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.178234 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" 
seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.178341 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.178415 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.178498 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.178595 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.178680 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.178754 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 
08:40:59.178840 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.178975 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.179062 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.179147 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.179224 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.179299 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.179383 4895 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.179467 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.179551 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.179655 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.179766 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.179865 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.179975 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.180070 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.180173 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.180259 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.180346 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.180450 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.180557 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.180657 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.180785 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.180868 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.180954 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.181027 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.181090 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.181152 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.181213 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.181279 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.181356 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.181432 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.181515 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.181602 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.181695 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.181775 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.181840 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.181907 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.182003 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.182065 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.182132 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.182215 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.182301 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.182384 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.182480 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" 
seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.182551 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.182629 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.182517 4895 manager.go:324] Recovery completed Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.182702 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.182843 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.183022 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.183107 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.183198 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.183289 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.183371 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.183438 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.183542 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.183625 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" 
seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.183688 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.183754 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.183817 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.183879 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.183970 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.184065 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 
08:40:59.184137 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.184202 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.184282 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.184363 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.184445 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.185606 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186059 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186180 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186236 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186278 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186313 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186362 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186401 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186447 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186477 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186518 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186558 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186589 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186631 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186674 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186706 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186748 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186804 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186846 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186868 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" 
Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186890 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186944 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186966 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.186985 4895 reconstruct.go:97] "Volume reconstruction finished" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.187000 4895 reconciler.go:26] "Reconciler: start to sync state" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.196965 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.198781 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.198837 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.198851 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.199866 4895 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 
29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.199896 4895 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.199944 4895 state_mem.go:36] "Initialized new in-memory state store" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.207784 4895 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.209870 4895 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.209941 4895 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.209971 4895 kubelet.go:2335] "Starting kubelet main sync loop" Jan 29 08:40:59 crc kubenswrapper[4895]: E0129 08:40:59.210031 4895 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 08:40:59 crc kubenswrapper[4895]: W0129 08:40:59.211237 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.142:6443: connect: connection refused Jan 29 08:40:59 crc kubenswrapper[4895]: E0129 08:40:59.211335 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.142:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.217487 4895 policy_none.go:49] "None policy: Start" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.218436 4895 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 
08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.218551 4895 state_mem.go:35] "Initializing new in-memory state store" Jan 29 08:40:59 crc kubenswrapper[4895]: E0129 08:40:59.247168 4895 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.280127 4895 manager.go:334] "Starting Device Plugin manager" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.280333 4895 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.280347 4895 server.go:79] "Starting device plugin registration server" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.280857 4895 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.280948 4895 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.281195 4895 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.281267 4895 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.281277 4895 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 08:40:59 crc kubenswrapper[4895]: E0129 08:40:59.290136 4895 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.310432 4895 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.310609 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.312219 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.312270 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.312284 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.312483 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.313634 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.313672 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.313682 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.313944 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.313979 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.314001 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.314124 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.314174 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.315198 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.315229 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.315240 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.315381 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.315413 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.315424 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.315451 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.315474 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.315388 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.315995 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.316046 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.316060 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.316357 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.316382 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.316436 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.316450 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.316513 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.316529 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:59 crc kubenswrapper[4895]: 
I0129 08:40:59.316666 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.316778 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.316814 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.317406 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.317438 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.317447 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.317651 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.317682 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.318345 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.318378 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.318350 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.318411 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.318390 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.318421 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:59 crc kubenswrapper[4895]: E0129 08:40:59.362394 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" interval="400ms" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.381922 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.383250 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 
08:40:59.383368 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.383439 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.383518 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 08:40:59 crc kubenswrapper[4895]: E0129 08:40:59.384024 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.142:6443: connect: connection refused" node="crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.390072 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.390100 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.390124 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.390141 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.390176 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.390193 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.390288 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.390302 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.390347 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.390364 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.390423 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.390489 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.390526 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.390559 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.390584 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492215 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492277 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492296 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492313 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492330 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492345 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492363 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492386 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492409 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492465 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492415 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492511 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492540 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492474 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492577 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: 
I0129 08:40:59.492519 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492515 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492474 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492513 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492598 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492621 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492641 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492661 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492683 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492694 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492739 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") 
pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.492759 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.493136 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.493156 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.493174 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.584895 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.586222 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.586259 4895 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.586270 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.586321 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 08:40:59 crc kubenswrapper[4895]: E0129 08:40:59.586744 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.142:6443: connect: connection refused" node="crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.642258 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.656631 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.672958 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.679584 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.683115 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:40:59 crc kubenswrapper[4895]: W0129 08:40:59.693689 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-ec6d8185c8dada55b128bd855468ff443f70b48c9f31244f1cc152068c770f80 WatchSource:0}: Error finding container ec6d8185c8dada55b128bd855468ff443f70b48c9f31244f1cc152068c770f80: Status 404 returned error can't find the container with id ec6d8185c8dada55b128bd855468ff443f70b48c9f31244f1cc152068c770f80 Jan 29 08:40:59 crc kubenswrapper[4895]: W0129 08:40:59.694632 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-940a8ed679213893ff49b0337619f1485fbd22f1619b0fafa20ae40e6d2c1e69 WatchSource:0}: Error finding container 940a8ed679213893ff49b0337619f1485fbd22f1619b0fafa20ae40e6d2c1e69: Status 404 returned error can't find the container with id 940a8ed679213893ff49b0337619f1485fbd22f1619b0fafa20ae40e6d2c1e69 Jan 29 08:40:59 crc kubenswrapper[4895]: W0129 08:40:59.699792 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-48a6ff2266630e4510b410037cf1769a4395e6a0ffaac19ac05654962e214ae5 WatchSource:0}: Error finding container 48a6ff2266630e4510b410037cf1769a4395e6a0ffaac19ac05654962e214ae5: Status 404 returned error can't find the container with id 48a6ff2266630e4510b410037cf1769a4395e6a0ffaac19ac05654962e214ae5 Jan 29 08:40:59 crc kubenswrapper[4895]: E0129 08:40:59.763794 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection 
refused" interval="800ms" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.987531 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.988846 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.988904 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.988934 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:40:59 crc kubenswrapper[4895]: I0129 08:40:59.988970 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 08:40:59 crc kubenswrapper[4895]: E0129 08:40:59.989611 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.142:6443: connect: connection refused" node="crc" Jan 29 08:41:00 crc kubenswrapper[4895]: W0129 08:41:00.057460 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.142:6443: connect: connection refused Jan 29 08:41:00 crc kubenswrapper[4895]: E0129 08:41:00.057556 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.142:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:41:00 crc kubenswrapper[4895]: I0129 08:41:00.147025 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:40:14.638507699 +0000 UTC Jan 29 08:41:00 crc kubenswrapper[4895]: I0129 08:41:00.147393 4895 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.142:6443: connect: connection refused Jan 29 08:41:00 crc kubenswrapper[4895]: W0129 08:41:00.211938 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.142:6443: connect: connection refused Jan 29 08:41:00 crc kubenswrapper[4895]: E0129 08:41:00.212029 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.142:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:41:00 crc kubenswrapper[4895]: I0129 08:41:00.214397 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"940a8ed679213893ff49b0337619f1485fbd22f1619b0fafa20ae40e6d2c1e69"} Jan 29 08:41:00 crc kubenswrapper[4895]: I0129 08:41:00.215367 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ec6d8185c8dada55b128bd855468ff443f70b48c9f31244f1cc152068c770f80"} Jan 29 08:41:00 crc kubenswrapper[4895]: I0129 08:41:00.216263 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d5ea269af38202cf7ab02e7744659feee26efb9f4227ee780c3db7080c448172"} Jan 29 08:41:00 crc kubenswrapper[4895]: I0129 08:41:00.216962 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ad16cb15601d71c1939be78466c31a337fa9e997a0959791faf4d1936983e657"} Jan 29 08:41:00 crc kubenswrapper[4895]: I0129 08:41:00.217663 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"48a6ff2266630e4510b410037cf1769a4395e6a0ffaac19ac05654962e214ae5"} Jan 29 08:41:00 crc kubenswrapper[4895]: W0129 08:41:00.471488 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.142:6443: connect: connection refused Jan 29 08:41:00 crc kubenswrapper[4895]: E0129 08:41:00.471591 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.142:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:41:00 crc kubenswrapper[4895]: W0129 08:41:00.472820 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.142:6443: connect: connection refused Jan 29 08:41:00 crc kubenswrapper[4895]: E0129 08:41:00.472875 4895 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.142:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:41:00 crc kubenswrapper[4895]: E0129 08:41:00.564341 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" interval="1.6s" Jan 29 08:41:00 crc kubenswrapper[4895]: I0129 08:41:00.789868 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:00 crc kubenswrapper[4895]: I0129 08:41:00.791299 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:00 crc kubenswrapper[4895]: I0129 08:41:00.791339 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:00 crc kubenswrapper[4895]: I0129 08:41:00.791352 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:00 crc kubenswrapper[4895]: I0129 08:41:00.791381 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 08:41:00 crc kubenswrapper[4895]: E0129 08:41:00.792198 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.142:6443: connect: connection refused" node="crc" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.147159 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 07:59:13.600231066 +0000 UTC Jan 29 08:41:01 crc 
kubenswrapper[4895]: I0129 08:41:01.147315 4895 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.142:6443: connect: connection refused Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.163886 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 08:41:01 crc kubenswrapper[4895]: E0129 08:41:01.165645 4895 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.142:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.222532 4895 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777" exitCode=0 Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.222616 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777"} Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.222654 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.223959 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.224132 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 
08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.224162 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.226134 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402"} Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.226166 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596"} Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.226179 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c"} Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.228092 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49" exitCode=0 Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.228216 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.228203 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49"} Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.229353 4895 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.229384 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.229396 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.230493 4895 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c2cc33256e233e38fc4f3c2e8ebb9d6efcbc6eb510d805f7aee528bbe9e93db9" exitCode=0 Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.230576 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.230616 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.230597 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c2cc33256e233e38fc4f3c2e8ebb9d6efcbc6eb510d805f7aee528bbe9e93db9"} Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.231357 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.231387 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.231401 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.231692 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.232089 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.232143 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.233467 4895 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="939df50d8f2af57a1c345e566d2a30d7275289a85184acda779cacc6ae449b11" exitCode=0 Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.233518 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"939df50d8f2af57a1c345e566d2a30d7275289a85184acda779cacc6ae449b11"} Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.233667 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.234528 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.234554 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:01 crc kubenswrapper[4895]: I0129 08:41:01.234564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:02 crc kubenswrapper[4895]: W0129 08:41:02.098285 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.142:6443: connect: connection refused Jan 29 08:41:02 crc kubenswrapper[4895]: E0129 08:41:02.098381 4895 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.142:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.147221 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:29:00.097805639 +0000 UTC Jan 29 08:41:02 crc kubenswrapper[4895]: W0129 08:41:02.147392 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.142:6443: connect: connection refused Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.147449 4895 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.142:6443: connect: connection refused Jan 29 08:41:02 crc kubenswrapper[4895]: E0129 08:41:02.147471 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.142:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:41:02 crc kubenswrapper[4895]: E0129 08:41:02.166025 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" interval="3.2s" Jan 29 08:41:02 crc 
kubenswrapper[4895]: I0129 08:41:02.238795 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7efd7a8365fe1196984119afbb2bd6f61289bb5396b77f733275c1e275519b72"} Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.238846 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"dd9854afcfb65e0bf01d43a3c9135ec03936b53449edd9a024f857fea2ab013d"} Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.238861 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c069173f445cfb105a3b28f49ce9bf7793ee448913a03b8b6d1ddbef91daee04"} Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.238885 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.240339 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.240373 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.240383 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.244623 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2"} Jan 29 08:41:02 crc 
kubenswrapper[4895]: I0129 08:41:02.244672 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.245791 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.245846 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.245863 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.249350 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124"} Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.249392 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0"} Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.249406 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855"} Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.249417 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a"} Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.251835 4895 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.251875 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"04125ac8de345b07dd928aff1b21f178375092d856ceddd053c5df653eec03b1"} Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.253377 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.253467 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.253486 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.254733 4895 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="db2dfd3b11a3ce2b031cb22c048cb046e8d0e4215ba1d83bfd15eb8354b448d7" exitCode=0 Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.254766 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"db2dfd3b11a3ce2b031cb22c048cb046e8d0e4215ba1d83bfd15eb8354b448d7"} Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.254861 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.255765 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.255816 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.255826 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.392852 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.393870 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.393899 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.393908 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:02 crc kubenswrapper[4895]: I0129 08:41:02.393943 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 08:41:02 crc kubenswrapper[4895]: E0129 08:41:02.394347 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.142:6443: connect: connection refused" node="crc" Jan 29 08:41:03 crc kubenswrapper[4895]: W0129 08:41:03.132809 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.142:6443: connect: connection refused Jan 29 08:41:03 crc kubenswrapper[4895]: E0129 08:41:03.132994 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.142:6443: connect: connection refused" logger="UnhandledError" Jan 29 08:41:03 crc 
kubenswrapper[4895]: I0129 08:41:03.146882 4895 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.142:6443: connect: connection refused Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.147865 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:14:47.549157705 +0000 UTC Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.260706 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730"} Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.260871 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.261542 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.261571 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.261583 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.265554 4895 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="391f2da0887a1491456cefdfa1bb3880e0f0b3c06bddf95228f0cbe3772f4048" exitCode=0 Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.265728 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 
08:41:03.265730 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"391f2da0887a1491456cefdfa1bb3880e0f0b3c06bddf95228f0cbe3772f4048"} Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.265768 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.265846 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.265851 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.265907 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.266634 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.266669 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.266684 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.267373 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.267416 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.267434 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:03 crc 
kubenswrapper[4895]: I0129 08:41:03.267437 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.267519 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.267529 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.268641 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.268665 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.268674 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.318228 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.318439 4895 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 29 08:41:03 crc kubenswrapper[4895]: I0129 08:41:03.318503 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 29 08:41:04 crc kubenswrapper[4895]: I0129 08:41:04.148659 4895 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:35:00.750430911 +0000 UTC Jan 29 08:41:04 crc kubenswrapper[4895]: I0129 08:41:04.271562 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4cf1f4153a1ca3ce7edc92948da669fc7a3b41133408d15cdbd296a2a0d795f0"} Jan 29 08:41:04 crc kubenswrapper[4895]: I0129 08:41:04.271617 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5104636743cc3e717716c119c09ac74f7ff52adb8f1696c8bfe6dbf58701003a"} Jan 29 08:41:04 crc kubenswrapper[4895]: I0129 08:41:04.271631 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:04 crc kubenswrapper[4895]: I0129 08:41:04.271635 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"748c6dbb30239a1269eecb2bdb4046d36db9f174f6ca07eb40c2d98fc24e35ac"} Jan 29 08:41:04 crc kubenswrapper[4895]: I0129 08:41:04.271747 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:04 crc kubenswrapper[4895]: I0129 08:41:04.271749 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:41:04 crc kubenswrapper[4895]: I0129 08:41:04.271848 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c2197e8b93ef1cb6d27c9333927331fbbe831262073e7376fb5c126345c4ad63"} Jan 29 08:41:04 crc kubenswrapper[4895]: I0129 08:41:04.272526 4895 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:04 crc kubenswrapper[4895]: I0129 08:41:04.272557 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:04 crc kubenswrapper[4895]: I0129 08:41:04.272567 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:04 crc kubenswrapper[4895]: I0129 08:41:04.272777 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:04 crc kubenswrapper[4895]: I0129 08:41:04.272813 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:04 crc kubenswrapper[4895]: I0129 08:41:04.272825 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 08:41:05.149704 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 22:31:23.030953542 +0000 UTC Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 08:41:05.271282 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 08:41:05.279310 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a5abfb879b44e7b20c49a168d7f9db5e9ea8c16e8564ccf6696218bd1d7a33ed"} Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 08:41:05.279425 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 08:41:05.279472 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 
08:41:05.281062 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 08:41:05.281080 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 08:41:05.281099 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 08:41:05.281111 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 08:41:05.281111 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 08:41:05.281567 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 08:41:05.515027 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 08:41:05.594466 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 08:41:05.596029 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 08:41:05.596069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 08:41:05.596079 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:05 crc kubenswrapper[4895]: I0129 08:41:05.596104 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 08:41:06 crc 
kubenswrapper[4895]: I0129 08:41:06.150021 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:04:44.835914762 +0000 UTC Jan 29 08:41:06 crc kubenswrapper[4895]: I0129 08:41:06.281345 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:06 crc kubenswrapper[4895]: I0129 08:41:06.282297 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:06 crc kubenswrapper[4895]: I0129 08:41:06.282341 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:06 crc kubenswrapper[4895]: I0129 08:41:06.282351 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:07 crc kubenswrapper[4895]: I0129 08:41:07.020256 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 29 08:41:07 crc kubenswrapper[4895]: I0129 08:41:07.150683 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 17:29:07.962032027 +0000 UTC Jan 29 08:41:07 crc kubenswrapper[4895]: I0129 08:41:07.284019 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:07 crc kubenswrapper[4895]: I0129 08:41:07.285025 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:07 crc kubenswrapper[4895]: I0129 08:41:07.285096 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:07 crc kubenswrapper[4895]: I0129 08:41:07.285120 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 08:41:07 crc kubenswrapper[4895]: I0129 08:41:07.718887 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:41:07 crc kubenswrapper[4895]: I0129 08:41:07.719104 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:07 crc kubenswrapper[4895]: I0129 08:41:07.720396 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:07 crc kubenswrapper[4895]: I0129 08:41:07.720436 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:07 crc kubenswrapper[4895]: I0129 08:41:07.720445 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:08 crc kubenswrapper[4895]: I0129 08:41:08.058470 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:41:08 crc kubenswrapper[4895]: I0129 08:41:08.058764 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:08 crc kubenswrapper[4895]: I0129 08:41:08.060253 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:08 crc kubenswrapper[4895]: I0129 08:41:08.060309 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:08 crc kubenswrapper[4895]: I0129 08:41:08.060333 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:08 crc kubenswrapper[4895]: I0129 08:41:08.151204 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 
13:23:28.548249951 +0000 UTC Jan 29 08:41:08 crc kubenswrapper[4895]: I0129 08:41:08.285986 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:08 crc kubenswrapper[4895]: I0129 08:41:08.286795 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:08 crc kubenswrapper[4895]: I0129 08:41:08.286833 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:08 crc kubenswrapper[4895]: I0129 08:41:08.286844 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:08 crc kubenswrapper[4895]: I0129 08:41:08.594735 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:41:08 crc kubenswrapper[4895]: I0129 08:41:08.594895 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:08 crc kubenswrapper[4895]: I0129 08:41:08.596699 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:08 crc kubenswrapper[4895]: I0129 08:41:08.596740 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:08 crc kubenswrapper[4895]: I0129 08:41:08.596754 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:08 crc kubenswrapper[4895]: I0129 08:41:08.601489 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:41:09 crc kubenswrapper[4895]: I0129 08:41:09.151987 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2026-01-10 12:00:42.458766387 +0000 UTC Jan 29 08:41:09 crc kubenswrapper[4895]: I0129 08:41:09.287245 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:09 crc kubenswrapper[4895]: I0129 08:41:09.287811 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:41:09 crc kubenswrapper[4895]: I0129 08:41:09.288264 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:09 crc kubenswrapper[4895]: I0129 08:41:09.288298 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:09 crc kubenswrapper[4895]: I0129 08:41:09.288309 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:09 crc kubenswrapper[4895]: E0129 08:41:09.290551 4895 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 08:41:10 crc kubenswrapper[4895]: I0129 08:41:10.152865 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:31:18.067781124 +0000 UTC Jan 29 08:41:10 crc kubenswrapper[4895]: I0129 08:41:10.290605 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:10 crc kubenswrapper[4895]: I0129 08:41:10.292575 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:10 crc kubenswrapper[4895]: I0129 08:41:10.292635 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:10 crc kubenswrapper[4895]: I0129 08:41:10.292658 4895 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:10 crc kubenswrapper[4895]: I0129 08:41:10.298975 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:41:11 crc kubenswrapper[4895]: I0129 08:41:11.058813 4895 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 08:41:11 crc kubenswrapper[4895]: I0129 08:41:11.058890 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:41:11 crc kubenswrapper[4895]: I0129 08:41:11.093035 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:41:11 crc kubenswrapper[4895]: I0129 08:41:11.153627 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 03:07:26.353273527 +0000 UTC Jan 29 08:41:11 crc kubenswrapper[4895]: I0129 08:41:11.292817 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:11 crc kubenswrapper[4895]: I0129 08:41:11.293878 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:11 crc kubenswrapper[4895]: I0129 08:41:11.293959 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:41:11 crc kubenswrapper[4895]: I0129 08:41:11.293975 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:12 crc kubenswrapper[4895]: I0129 08:41:12.154218 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:41:53.383581191 +0000 UTC Jan 29 08:41:12 crc kubenswrapper[4895]: I0129 08:41:12.295014 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:12 crc kubenswrapper[4895]: I0129 08:41:12.296106 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:12 crc kubenswrapper[4895]: I0129 08:41:12.296139 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:12 crc kubenswrapper[4895]: I0129 08:41:12.296150 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:12 crc kubenswrapper[4895]: I0129 08:41:12.784146 4895 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 29 08:41:12 crc kubenswrapper[4895]: I0129 08:41:12.784216 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 29 08:41:13 crc kubenswrapper[4895]: I0129 08:41:13.086437 4895 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 29 08:41:13 crc kubenswrapper[4895]: I0129 08:41:13.086490 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 29 08:41:13 crc kubenswrapper[4895]: I0129 08:41:13.154418 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 02:38:39.199847963 +0000 UTC Jan 29 08:41:13 crc kubenswrapper[4895]: W0129 08:41:13.380611 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 29 08:41:13 crc kubenswrapper[4895]: I0129 08:41:13.380729 4895 trace.go:236] Trace[1018873150]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 08:41:03.379) (total time: 10001ms): Jan 29 08:41:13 crc kubenswrapper[4895]: Trace[1018873150]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:41:13.380) Jan 29 08:41:13 crc kubenswrapper[4895]: Trace[1018873150]: [10.001563307s] [10.001563307s] END Jan 29 08:41:13 crc kubenswrapper[4895]: E0129 08:41:13.380760 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 29 08:41:14 crc kubenswrapper[4895]: I0129 08:41:14.142978 4895 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 08:41:14 crc kubenswrapper[4895]: I0129 08:41:14.143096 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 08:41:14 crc kubenswrapper[4895]: I0129 08:41:14.148119 4895 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 08:41:14 crc kubenswrapper[4895]: I0129 08:41:14.148213 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 08:41:14 crc kubenswrapper[4895]: I0129 08:41:14.155105 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:48:22.285265672 +0000 UTC Jan 29 08:41:15 crc kubenswrapper[4895]: 
I0129 08:41:15.155757 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:06:41.512537165 +0000 UTC Jan 29 08:41:16 crc kubenswrapper[4895]: I0129 08:41:16.156462 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 03:07:15.669104098 +0000 UTC Jan 29 08:41:17 crc kubenswrapper[4895]: I0129 08:41:17.048331 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 29 08:41:17 crc kubenswrapper[4895]: I0129 08:41:17.048541 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:17 crc kubenswrapper[4895]: I0129 08:41:17.049715 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:17 crc kubenswrapper[4895]: I0129 08:41:17.049759 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:17 crc kubenswrapper[4895]: I0129 08:41:17.049771 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:17 crc kubenswrapper[4895]: I0129 08:41:17.063370 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 29 08:41:17 crc kubenswrapper[4895]: I0129 08:41:17.157420 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 17:32:32.11599996 +0000 UTC Jan 29 08:41:17 crc kubenswrapper[4895]: I0129 08:41:17.306044 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:17 crc kubenswrapper[4895]: I0129 08:41:17.307292 4895 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:17 crc kubenswrapper[4895]: I0129 08:41:17.307361 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:17 crc kubenswrapper[4895]: I0129 08:41:17.307380 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:18 crc kubenswrapper[4895]: I0129 08:41:18.157824 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 15:36:51.850696926 +0000 UTC Jan 29 08:41:18 crc kubenswrapper[4895]: I0129 08:41:18.327463 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:41:18 crc kubenswrapper[4895]: I0129 08:41:18.327780 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:18 crc kubenswrapper[4895]: I0129 08:41:18.329154 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:18 crc kubenswrapper[4895]: I0129 08:41:18.329330 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:18 crc kubenswrapper[4895]: I0129 08:41:18.329425 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:18 crc kubenswrapper[4895]: I0129 08:41:18.332535 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:41:19 crc kubenswrapper[4895]: E0129 08:41:19.133413 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 29 
08:41:19 crc kubenswrapper[4895]: E0129 08:41:19.144362 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 29 08:41:19 crc kubenswrapper[4895]: I0129 08:41:19.146104 4895 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 08:41:19 crc kubenswrapper[4895]: I0129 08:41:19.146298 4895 trace.go:236] Trace[350414832]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 08:41:06.385) (total time: 12760ms): Jan 29 08:41:19 crc kubenswrapper[4895]: Trace[350414832]: ---"Objects listed" error: 12760ms (08:41:19.146) Jan 29 08:41:19 crc kubenswrapper[4895]: Trace[350414832]: [12.760603378s] [12.760603378s] END Jan 29 08:41:19 crc kubenswrapper[4895]: I0129 08:41:19.146328 4895 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 08:41:19 crc kubenswrapper[4895]: I0129 08:41:19.146507 4895 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 29 08:41:19 crc kubenswrapper[4895]: I0129 08:41:19.147354 4895 trace.go:236] Trace[1899807629]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 08:41:08.536) (total time: 10610ms): Jan 29 08:41:19 crc kubenswrapper[4895]: Trace[1899807629]: ---"Objects listed" error: 10610ms (08:41:19.146) Jan 29 08:41:19 crc kubenswrapper[4895]: Trace[1899807629]: [10.610756243s] [10.610756243s] END Jan 29 08:41:19 crc kubenswrapper[4895]: I0129 08:41:19.147401 4895 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 08:41:19 crc kubenswrapper[4895]: I0129 08:41:19.151570 4895 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 08:41:19 crc kubenswrapper[4895]: I0129 08:41:19.158695 4895 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 18:24:48.157103299 +0000 UTC Jan 29 08:41:19 crc kubenswrapper[4895]: I0129 08:41:19.216743 4895 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44836->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 29 08:41:19 crc kubenswrapper[4895]: I0129 08:41:19.216805 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44836->192.168.126.11:17697: read: connection reset by peer" Jan 29 08:41:19 crc kubenswrapper[4895]: I0129 08:41:19.312262 4895 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 29 08:41:19 crc kubenswrapper[4895]: I0129 08:41:19.312333 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 29 08:41:19 crc kubenswrapper[4895]: I0129 08:41:19.677151 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:41:19 crc kubenswrapper[4895]: I0129 
08:41:19.681340 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:41:19 crc kubenswrapper[4895]: I0129 08:41:19.776070 4895 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.144114 4895 apiserver.go:52] "Watching apiserver" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.146789 4895 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.147169 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.147539 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.147620 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.147651 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.147664 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.147887 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.148321 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.148339 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.148388 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.148416 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.150253 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.150341 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.150377 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.150393 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.150350 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.150675 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.150974 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.150911 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.151725 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.154092 4895 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 
08:41:20.154593 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.154634 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.154663 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.154721 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.154746 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.154770 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") 
pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.154795 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.154817 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.154844 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.154869 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.154891 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.154932 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.154959 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.154983 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.154981 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155003 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155024 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155046 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155043 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155071 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155097 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155123 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155145 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155166 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155191 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155216 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155239 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155260 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155272 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155282 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155346 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155376 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155402 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155384 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155417 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155507 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156106 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155510 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155621 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155646 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155667 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155670 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155821 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156205 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155426 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156290 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156326 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156347 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156365 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156401 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156637 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156660 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 
08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156678 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156699 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156721 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156740 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156757 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156789 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156815 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156849 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156866 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156889 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156949 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 08:41:20 crc kubenswrapper[4895]: 
I0129 08:41:20.156970 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157017 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157156 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157193 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157215 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157250 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") 
pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156214 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156210 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155879 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155892 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157293 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156023 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156053 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.155871 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.156872 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157168 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157174 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157248 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157287 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157462 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157493 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157518 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157545 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157559 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157591 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157620 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157647 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157706 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157709 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157703 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157766 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157764 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157857 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157889 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157949 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157952 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157974 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158004 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158026 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158045 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158069 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158089 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158108 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158125 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158148 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158176 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158203 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158227 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158248 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158271 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158288 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158308 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158330 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158354 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158383 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158402 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158425 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158463 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158492 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158519 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158539 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158590 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158614 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158637 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158661 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158723 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158751 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158779 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158803 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.157969 4895 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158220 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158257 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158316 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158327 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158402 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158463 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.166451 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158533 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.158642 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.160293 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.160345 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.160447 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.162639 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.163197 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.164999 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.165194 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.165255 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:39:08.224439764 +0000 UTC Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.165651 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.165665 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.166379 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.166626 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.166782 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.167483 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.167549 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.168053 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.168099 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.168978 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.169053 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.172664 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.172994 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.173075 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.173102 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.173211 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.173292 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.173302 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.173566 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.173585 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.173754 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.173773 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.173848 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.174170 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.174459 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.175243 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.175789 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.176442 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.176971 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.177187 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.177621 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.177849 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.177950 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.178080 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.178130 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.178273 4895 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.178157 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.178377 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.178403 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.178454 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.178487 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.178610 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.178639 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.181855 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.181940 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.181965 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.181988 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182010 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182032 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182057 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182088 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182121 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182148 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182173 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182197 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182215 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182233 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182255 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182277 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182299 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182318 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182338 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182356 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 08:41:20 crc 
kubenswrapper[4895]: I0129 08:41:20.182375 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182397 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182418 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182441 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182464 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182485 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182502 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182521 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182547 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182567 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182586 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 08:41:20 crc kubenswrapper[4895]: 
I0129 08:41:20.182608 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182625 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182648 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182705 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182737 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182767 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182791 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182814 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182834 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182858 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182885 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182906 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182941 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182961 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.182985 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183004 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183023 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183043 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183065 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183090 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183111 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183131 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183149 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183167 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183189 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183209 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183231 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183258 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183279 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183300 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183323 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183341 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183362 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") 
pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183383 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183401 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183423 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183444 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183461 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183482 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183502 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183522 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183540 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183559 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183614 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183719 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183742 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183797 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183871 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183887 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.183977 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.184042 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.184091 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.184121 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.184284 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.184338 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.184435 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.184472 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.184627 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.184700 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.185355 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.188486 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.188573 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.170255 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.189050 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.189206 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.190766 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.190887 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.191116 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.191224 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.191298 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.191564 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.184759 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.191963 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.192097 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.192817 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.196089 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.196401 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.196889 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.197101 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.197823 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.185031 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:20.684806603 +0000 UTC m=+22.326314749 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.201029 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.201076 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.201132 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.206502 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.206746 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.206930 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.207237 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.207498 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.207499 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.207668 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.207812 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.207878 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.207955 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.208029 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.208090 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.208117 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.208158 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.208486 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:20.708468526 +0000 UTC m=+22.349976672 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.208633 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.208673 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.208893 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.209315 4895 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.209626 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.210551 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.210824 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.211934 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.211993 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.212033 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.212081 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.212376 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.212899 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:20.71287293 +0000 UTC m=+22.354381076 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213086 4895 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213107 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213117 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213128 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213210 4895 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213222 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213232 4895 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213244 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213254 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213315 4895 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213329 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213339 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213349 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213358 4895 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213367 4895 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213660 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213675 4895 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213684 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213694 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213704 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213713 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 
08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213723 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213732 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213742 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213751 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213759 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213768 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213777 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213785 4895 
reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213796 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213805 4895 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213813 4895 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213823 4895 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213831 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213840 4895 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213849 4895 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213857 4895 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213866 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213875 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213884 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213893 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213901 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213910 4895 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 
08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213939 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213950 4895 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213959 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213968 4895 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213976 4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213985 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.213994 4895 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 
08:41:20.214004 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214012 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214021 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214029 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214040 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214048 4895 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214058 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214067 4895 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214076 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214085 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214272 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214390 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214408 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214418 4895 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214428 4895 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214439 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214448 4895 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214457 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc 
kubenswrapper[4895]: I0129 08:41:20.214466 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214475 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214484 4895 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214496 4895 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214507 4895 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214518 4895 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214529 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214543 
4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214555 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214571 4895 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214582 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214594 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214604 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214615 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214624 4895 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214632 4895 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214640 4895 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214649 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214674 4895 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214685 4895 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214693 4895 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214701 4895 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: 
I0129 08:41:20.214710 4895 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214780 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214788 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214836 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214859 4895 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214877 4895 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214891 4895 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc 
kubenswrapper[4895]: I0129 08:41:20.214903 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214935 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214953 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.214968 4895 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.215023 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.215200 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.215626 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.215800 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.216135 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.216304 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.216568 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.216710 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.219176 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.220426 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.220555 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.222031 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.222442 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.222563 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.222644 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.222817 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:20.722791446 +0000 UTC m=+22.364299582 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.223821 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.224026 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.224049 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.224063 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.224100 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:20.724089907 +0000 UTC m=+22.365598053 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.224480 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.224594 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.224848 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.225151 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.225422 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.225628 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.225836 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.225959 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.226227 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.226547 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.226864 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.227075 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.197977 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.227191 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.227488 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.227472 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.227787 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.228077 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.228377 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.228734 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.229534 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.192394 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.229688 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.188886 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.230385 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.230571 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.230839 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.231112 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.231194 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.231461 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.231794 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.231994 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.232195 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.232467 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.232651 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.233460 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.233512 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.233973 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.234314 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.234382 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.234401 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.234766 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.235192 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.235268 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.235503 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.235670 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.235983 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.241087 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.241726 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.242356 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.242578 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.242722 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.243569 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.243831 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.243992 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.244529 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.244959 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.245064 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.248221 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.248410 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.249160 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.249225 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.252180 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.261623 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.261754 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.262491 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.266061 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.275207 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.280490 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.280714 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.301399 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315350 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315485 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315542 4895 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315557 4895 reconciler_common.go:293] "Volume detached for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315570 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315583 4895 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315585 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315596 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315609 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315620 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315631 4895 reconciler_common.go:293] 
"Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315643 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315654 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315669 4895 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315681 4895 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315693 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315704 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315715 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315726 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315736 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315746 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315757 4895 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315768 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315779 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315790 4895 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc 
kubenswrapper[4895]: I0129 08:41:20.315801 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315817 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315828 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315840 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315852 4895 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315863 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315875 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315886 4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315897 4895 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315927 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315939 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315951 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315964 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315974 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315985 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.315995 4895 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316006 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316016 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316030 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316041 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316052 4895 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316088 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316103 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316142 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316155 4895 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316166 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316177 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316189 4895 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316266 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316307 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316321 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316336 4895 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316350 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316363 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316383 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316397 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316410 4895 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316424 4895 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316447 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316461 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316475 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316489 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316541 4895 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316555 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316567 4895 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316580 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316692 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316718 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316736 4895 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316752 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316774 4895 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316827 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316848 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.316974 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317016 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317031 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317043 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317056 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317070 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317082 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317096 4895 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317112 4895 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317127 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317139 4895 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 
08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317209 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317223 4895 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317235 4895 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317250 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317263 4895 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317276 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317292 4895 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317305 4895 reconciler_common.go:293] "Volume 
detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.317323 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.318301 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.320982 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.321493 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730" exitCode=255 Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.321864 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730"} Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.322679 4895 scope.go:117] "RemoveContainer" containerID="4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730" Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.329905 4895 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.355527 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.375000 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.391558 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.405006 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.424022 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.440471 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.457214 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.471851 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.476056 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.487570 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: W0129 08:41:20.489952 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-4ecb438465d0f51d061e8af2c95ad2f69e6e8abef72c736f681da8b34635c3bf WatchSource:0}: Error finding container 4ecb438465d0f51d061e8af2c95ad2f69e6e8abef72c736f681da8b34635c3bf: Status 404 returned error can't find the container with id 4ecb438465d0f51d061e8af2c95ad2f69e6e8abef72c736f681da8b34635c3bf Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.502037 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.517390 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.529811 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.533009 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: W0129 08:41:20.540846 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-9e01ae91c7416ce8c497bbef90a8f44f63b21eb15e3880ae4bf452ef636f5765 WatchSource:0}: Error finding container 9e01ae91c7416ce8c497bbef90a8f44f63b21eb15e3880ae4bf452ef636f5765: Status 404 returned error can't find the container with id 9e01ae91c7416ce8c497bbef90a8f44f63b21eb15e3880ae4bf452ef636f5765 Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.546428 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.547449 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.720373 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.720449 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.720482 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.720519 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:21.720501448 +0000 UTC m=+23.362009594 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.720546 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.720562 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.720590 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:21.72058265 +0000 UTC m=+23.362090796 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.720602 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-29 08:41:21.72059705 +0000 UTC m=+23.362105196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.821537 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:20 crc kubenswrapper[4895]: I0129 08:41:20.821908 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.821747 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.821984 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.822005 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 
08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.822016 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.822068 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:21.822051309 +0000 UTC m=+23.463559515 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.821985 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.822104 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:20 crc kubenswrapper[4895]: E0129 08:41:20.822199 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:21.822181103 +0000 UTC m=+23.463689329 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.167992 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 18:55:35.334919335 +0000 UTC Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.217155 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.217938 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.219189 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.219848 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.220841 4895 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.221416 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.222076 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.223088 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.223693 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.224764 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.225320 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.226450 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.227085 4895 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.227659 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.228695 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.229247 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.230212 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.230640 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.231253 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.232365 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.232823 4895 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.233776 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.234267 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.235423 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.235823 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.236432 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.237990 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.238478 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.239600 4895 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.240222 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.241159 4895 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.241264 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.243125 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.244098 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.244504 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.246120 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 08:41:21 
crc kubenswrapper[4895]: I0129 08:41:21.246824 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.247847 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.248545 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.249751 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.250316 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.251425 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.252301 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.253303 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 08:41:21 
crc kubenswrapper[4895]: I0129 08:41:21.253734 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.254837 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.255379 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.256655 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.257355 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.258524 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.259298 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.262342 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 08:41:21 
crc kubenswrapper[4895]: I0129 08:41:21.263316 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.263813 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.327011 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.328891 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a"} Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.330313 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.331651 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"82761fe40c073e1f29e2ce45f4377a025c6c4fa4259835db180080fe9a35fcb7"} Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.333522 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b"} Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.333587 4895 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a"} Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.333606 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9e01ae91c7416ce8c497bbef90a8f44f63b21eb15e3880ae4bf452ef636f5765"} Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.335616 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f"} Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.335645 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4ecb438465d0f51d061e8af2c95ad2f69e6e8abef72c736f681da8b34635c3bf"} Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.346209 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.360878 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.375437 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.390137 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.404179 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.415538 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.428521 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.440732 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.453135 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.467633 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.480372 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.492769 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.505642 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.518388 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.532806 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.545651 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.730848 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.730970 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.731016 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:21 crc kubenswrapper[4895]: E0129 08:41:21.731108 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:23.731083111 +0000 UTC m=+25.372591247 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:21 crc kubenswrapper[4895]: E0129 08:41:21.731174 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:41:21 crc kubenswrapper[4895]: E0129 08:41:21.731213 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:41:21 crc kubenswrapper[4895]: E0129 08:41:21.731342 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:23.731302176 +0000 UTC m=+25.372810362 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:41:21 crc kubenswrapper[4895]: E0129 08:41:21.731396 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:23.731374668 +0000 UTC m=+25.372882974 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.832227 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:21 crc kubenswrapper[4895]: I0129 08:41:21.832276 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:21 crc kubenswrapper[4895]: E0129 08:41:21.832416 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:41:21 crc kubenswrapper[4895]: E0129 08:41:21.832429 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:41:21 crc kubenswrapper[4895]: E0129 08:41:21.832439 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:21 crc kubenswrapper[4895]: E0129 08:41:21.832478 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:23.832466209 +0000 UTC m=+25.473974345 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:21 crc kubenswrapper[4895]: E0129 08:41:21.832754 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:41:21 crc kubenswrapper[4895]: E0129 08:41:21.832769 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:41:21 crc kubenswrapper[4895]: E0129 08:41:21.832776 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:21 crc kubenswrapper[4895]: E0129 08:41:21.832799 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:23.832792346 +0000 UTC m=+25.474300492 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:22 crc kubenswrapper[4895]: I0129 08:41:22.168460 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 08:58:19.098592576 +0000 UTC Jan 29 08:41:22 crc kubenswrapper[4895]: I0129 08:41:22.210349 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:22 crc kubenswrapper[4895]: I0129 08:41:22.210381 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:22 crc kubenswrapper[4895]: I0129 08:41:22.210413 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:22 crc kubenswrapper[4895]: E0129 08:41:22.210470 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:22 crc kubenswrapper[4895]: E0129 08:41:22.210529 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:22 crc kubenswrapper[4895]: E0129 08:41:22.210598 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:23 crc kubenswrapper[4895]: I0129 08:41:23.169413 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:55:15.049569922 +0000 UTC Jan 29 08:41:23 crc kubenswrapper[4895]: E0129 08:41:23.797152 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:27.797125604 +0000 UTC m=+29.438633760 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:23 crc kubenswrapper[4895]: I0129 08:41:23.797019 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:23 crc kubenswrapper[4895]: I0129 08:41:23.797276 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:23 crc kubenswrapper[4895]: E0129 08:41:23.797394 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:41:23 crc kubenswrapper[4895]: E0129 08:41:23.797448 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:27.797436202 +0000 UTC m=+29.438944368 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:41:23 crc kubenswrapper[4895]: I0129 08:41:23.797966 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:23 crc kubenswrapper[4895]: E0129 08:41:23.798120 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:41:23 crc kubenswrapper[4895]: E0129 08:41:23.798180 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:27.798164829 +0000 UTC m=+29.439672985 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:41:23 crc kubenswrapper[4895]: E0129 08:41:23.899396 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:41:23 crc kubenswrapper[4895]: E0129 08:41:23.899444 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:41:23 crc kubenswrapper[4895]: E0129 08:41:23.899462 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:23 crc kubenswrapper[4895]: E0129 08:41:23.899534 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:27.899512186 +0000 UTC m=+29.541020352 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:23 crc kubenswrapper[4895]: I0129 08:41:23.899183 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:23 crc kubenswrapper[4895]: I0129 08:41:23.900131 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:23 crc kubenswrapper[4895]: E0129 08:41:23.900254 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:41:23 crc kubenswrapper[4895]: E0129 08:41:23.900273 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:41:23 crc kubenswrapper[4895]: E0129 08:41:23.900284 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:23 crc kubenswrapper[4895]: E0129 08:41:23.900334 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:27.900316055 +0000 UTC m=+29.541824221 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.170521 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 05:28:44.336947699 +0000 UTC Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.210819 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.210897 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.210999 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:24 crc kubenswrapper[4895]: E0129 08:41:24.211092 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:24 crc kubenswrapper[4895]: E0129 08:41:24.211172 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:24 crc kubenswrapper[4895]: E0129 08:41:24.211327 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.345680 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b"} Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.360310 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.377491 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.391543 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.410324 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.425137 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.440765 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.463375 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.482505 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.621275 4895 csr.go:261] certificate signing request csr-5slx8 is approved, waiting to be issued Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.715040 4895 csr.go:257] certificate signing request csr-5slx8 is issued Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.820930 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-wfxqf"] Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.821425 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-wfxqf" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.823266 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.823290 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.826178 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.842707 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.856112 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.869386 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.882947 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.898071 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.910840 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/da81d90f-1b31-410e-8de7-2f5d25b99a34-hosts-file\") pod \"node-resolver-wfxqf\" (UID: \"da81d90f-1b31-410e-8de7-2f5d25b99a34\") " pod="openshift-dns/node-resolver-wfxqf" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.910953 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr449\" (UniqueName: \"kubernetes.io/projected/da81d90f-1b31-410e-8de7-2f5d25b99a34-kube-api-access-gr449\") pod \"node-resolver-wfxqf\" (UID: \"da81d90f-1b31-410e-8de7-2f5d25b99a34\") " pod="openshift-dns/node-resolver-wfxqf" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.914643 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.929003 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.948337 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:24 crc kubenswrapper[4895]: I0129 08:41:24.967589 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.011892 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/da81d90f-1b31-410e-8de7-2f5d25b99a34-hosts-file\") pod \"node-resolver-wfxqf\" (UID: \"da81d90f-1b31-410e-8de7-2f5d25b99a34\") " pod="openshift-dns/node-resolver-wfxqf" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.012132 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/da81d90f-1b31-410e-8de7-2f5d25b99a34-hosts-file\") pod \"node-resolver-wfxqf\" (UID: \"da81d90f-1b31-410e-8de7-2f5d25b99a34\") " pod="openshift-dns/node-resolver-wfxqf" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.012418 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr449\" (UniqueName: \"kubernetes.io/projected/da81d90f-1b31-410e-8de7-2f5d25b99a34-kube-api-access-gr449\") pod \"node-resolver-wfxqf\" (UID: \"da81d90f-1b31-410e-8de7-2f5d25b99a34\") " 
pod="openshift-dns/node-resolver-wfxqf" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.038083 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr449\" (UniqueName: \"kubernetes.io/projected/da81d90f-1b31-410e-8de7-2f5d25b99a34-kube-api-access-gr449\") pod \"node-resolver-wfxqf\" (UID: \"da81d90f-1b31-410e-8de7-2f5d25b99a34\") " pod="openshift-dns/node-resolver-wfxqf" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.134212 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wfxqf" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.171603 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 16:31:13.294442681 +0000 UTC Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.221805 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-z82hk"] Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.222239 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-b4dgj"] Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.222409 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-7j8rs"] Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.222638 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.222770 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.223711 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.228529 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.228881 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.229706 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.230038 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.230264 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.230512 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.239363 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.240090 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.240390 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.240648 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.240670 4895 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.240828 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.267364 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.286534 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.304360 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.318668 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/be953ef9-0feb-4327-ba58-0e29287bab39-system-cni-dir\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.318718 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-multus-conf-dir\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.318743 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-run-multus-certs\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.318766 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-cnibin\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.318785 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/69ba7dcf-e7a0-4408-983b-09a07851d01c-multus-daemon-config\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.318827 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/a4a4bd95-f02a-4617-9aa4-febfa6bee92b-mcd-auth-proxy-config\") pod \"machine-config-daemon-z82hk\" (UID: \"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\") " pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.318855 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a4a4bd95-f02a-4617-9aa4-febfa6bee92b-rootfs\") pod \"machine-config-daemon-z82hk\" (UID: \"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\") " pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.318883 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-multus-socket-dir-parent\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.318901 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/be953ef9-0feb-4327-ba58-0e29287bab39-cnibin\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.318939 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-system-cni-dir\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.318959 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-etc-kubernetes\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.318981 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/be953ef9-0feb-4327-ba58-0e29287bab39-cni-binary-copy\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.319010 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9dx9\" (UniqueName: \"kubernetes.io/projected/a4a4bd95-f02a-4617-9aa4-febfa6bee92b-kube-api-access-h9dx9\") pod \"machine-config-daemon-z82hk\" (UID: \"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\") " pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.319030 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/be953ef9-0feb-4327-ba58-0e29287bab39-os-release\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.319052 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4a4bd95-f02a-4617-9aa4-febfa6bee92b-proxy-tls\") pod \"machine-config-daemon-z82hk\" (UID: \"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\") " 
pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.319071 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-hostroot\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.319090 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-multus-cni-dir\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.319111 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/69ba7dcf-e7a0-4408-983b-09a07851d01c-cni-binary-copy\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.319131 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-run-netns\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.319150 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-var-lib-cni-multus\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 
08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.319175 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/be953ef9-0feb-4327-ba58-0e29287bab39-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.319217 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-run-k8s-cni-cncf-io\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.319235 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-var-lib-kubelet\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.319259 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/be953ef9-0feb-4327-ba58-0e29287bab39-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.319279 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-var-lib-cni-bin\") pod \"multus-b4dgj\" (UID: 
\"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.319299 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-os-release\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.319321 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn4w2\" (UniqueName: \"kubernetes.io/projected/69ba7dcf-e7a0-4408-983b-09a07851d01c-kube-api-access-vn4w2\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.319459 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rkk5\" (UniqueName: \"kubernetes.io/projected/be953ef9-0feb-4327-ba58-0e29287bab39-kube-api-access-9rkk5\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.330529 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.345890 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.350299 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wfxqf" event={"ID":"da81d90f-1b31-410e-8de7-2f5d25b99a34","Type":"ContainerStarted","Data":"584872a69493f2f26700a73b27c9cf09330fa70eee1e05be39f2332cd13b467b"} Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.367036 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.381334 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.397373 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.417224 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.419737 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-cnibin\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " 
pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.419781 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-run-multus-certs\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.419814 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4a4bd95-f02a-4617-9aa4-febfa6bee92b-mcd-auth-proxy-config\") pod \"machine-config-daemon-z82hk\" (UID: \"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\") " pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.419836 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/69ba7dcf-e7a0-4408-983b-09a07851d01c-multus-daemon-config\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.419855 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a4a4bd95-f02a-4617-9aa4-febfa6bee92b-rootfs\") pod \"machine-config-daemon-z82hk\" (UID: \"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\") " pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.419864 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-cnibin\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 
08:41:25.420059 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a4a4bd95-f02a-4617-9aa4-febfa6bee92b-rootfs\") pod \"machine-config-daemon-z82hk\" (UID: \"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\") " pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420110 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-run-multus-certs\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.419873 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-multus-socket-dir-parent\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420083 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-multus-socket-dir-parent\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420232 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/be953ef9-0feb-4327-ba58-0e29287bab39-cnibin\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420320 4895 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-h9dx9\" (UniqueName: \"kubernetes.io/projected/a4a4bd95-f02a-4617-9aa4-febfa6bee92b-kube-api-access-h9dx9\") pod \"machine-config-daemon-z82hk\" (UID: \"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\") " pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420347 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/be953ef9-0feb-4327-ba58-0e29287bab39-cnibin\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420350 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-system-cni-dir\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420404 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-etc-kubernetes\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420418 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-system-cni-dir\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420427 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/be953ef9-0feb-4327-ba58-0e29287bab39-cni-binary-copy\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420479 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/be953ef9-0feb-4327-ba58-0e29287bab39-os-release\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420514 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4a4bd95-f02a-4617-9aa4-febfa6bee92b-proxy-tls\") pod \"machine-config-daemon-z82hk\" (UID: \"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\") " pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420539 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-hostroot\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420566 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-multus-cni-dir\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420590 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/69ba7dcf-e7a0-4408-983b-09a07851d01c-cni-binary-copy\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420648 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-run-netns\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420682 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-var-lib-cni-multus\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420708 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/be953ef9-0feb-4327-ba58-0e29287bab39-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420747 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-run-k8s-cni-cncf-io\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420772 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-var-lib-kubelet\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420818 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/be953ef9-0feb-4327-ba58-0e29287bab39-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420851 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-var-lib-cni-bin\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420884 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-os-release\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420885 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/69ba7dcf-e7a0-4408-983b-09a07851d01c-multus-daemon-config\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420912 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn4w2\" (UniqueName: \"kubernetes.io/projected/69ba7dcf-e7a0-4408-983b-09a07851d01c-kube-api-access-vn4w2\") pod \"multus-b4dgj\" 
(UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420967 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4a4bd95-f02a-4617-9aa4-febfa6bee92b-mcd-auth-proxy-config\") pod \"machine-config-daemon-z82hk\" (UID: \"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\") " pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.421025 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-var-lib-cni-multus\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420977 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-run-netns\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.421168 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/be953ef9-0feb-4327-ba58-0e29287bab39-cni-binary-copy\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.421209 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-etc-kubernetes\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 
08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.420970 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rkk5\" (UniqueName: \"kubernetes.io/projected/be953ef9-0feb-4327-ba58-0e29287bab39-kube-api-access-9rkk5\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.421487 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/be953ef9-0feb-4327-ba58-0e29287bab39-system-cni-dir\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.421523 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-multus-conf-dir\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.421547 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-run-k8s-cni-cncf-io\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.421598 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-multus-conf-dir\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.421668 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/be953ef9-0feb-4327-ba58-0e29287bab39-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.421679 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-var-lib-kubelet\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.421724 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-host-var-lib-cni-bin\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.421799 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/be953ef9-0feb-4327-ba58-0e29287bab39-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.421853 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/be953ef9-0feb-4327-ba58-0e29287bab39-system-cni-dir\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.421863 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-hostroot\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.422029 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-multus-cni-dir\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.422131 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/be953ef9-0feb-4327-ba58-0e29287bab39-os-release\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.422243 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/69ba7dcf-e7a0-4408-983b-09a07851d01c-os-release\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.422524 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/69ba7dcf-e7a0-4408-983b-09a07851d01c-cni-binary-copy\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.431442 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4a4bd95-f02a-4617-9aa4-febfa6bee92b-proxy-tls\") pod \"machine-config-daemon-z82hk\" (UID: 
\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\") " pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.439716 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.440010 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn4w2\" (UniqueName: \"kubernetes.io/projected/69ba7dcf-e7a0-4408-983b-09a07851d01c-kube-api-access-vn4w2\") pod \"multus-b4dgj\" (UID: \"69ba7dcf-e7a0-4408-983b-09a07851d01c\") " pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.442529 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9dx9\" (UniqueName: \"kubernetes.io/projected/a4a4bd95-f02a-4617-9aa4-febfa6bee92b-kube-api-access-h9dx9\") pod \"machine-config-daemon-z82hk\" (UID: \"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\") " pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.444052 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rkk5\" (UniqueName: \"kubernetes.io/projected/be953ef9-0feb-4327-ba58-0e29287bab39-kube-api-access-9rkk5\") pod \"multus-additional-cni-plugins-7j8rs\" (UID: \"be953ef9-0feb-4327-ba58-0e29287bab39\") " pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.457081 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.479497 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.498263 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.524285 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.542401 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.544493 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.546246 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.546305 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.546322 4895 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.546473 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.555499 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.561163 4895 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.561293 4895 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.562716 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.562868 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.562964 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.563065 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.563352 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.563195 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:25Z","lastTransitionTime":"2026-01-29T08:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.570772 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.574586 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-b4dgj" Jan 29 08:41:25 crc kubenswrapper[4895]: E0129 08:41:25.588692 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.595405 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.595451 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.595462 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.595481 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.595493 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:25Z","lastTransitionTime":"2026-01-29T08:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.599830 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: E0129 08:41:25.636556 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.639641 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.652425 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.652493 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.652508 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.652534 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.652550 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:25Z","lastTransitionTime":"2026-01-29T08:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.682348 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f10
52dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.689112 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x4zc4"] Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.690425 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.696517 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.696752 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.696868 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.697361 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.697448 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.697741 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.697768 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 08:41:25 crc kubenswrapper[4895]: E0129 08:41:25.701013 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.726416 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.726857 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 08:36:24 +0000 UTC, rotation deadline is 2026-12-06 19:53:19.10843469 +0000 UTC Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.726936 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7475h11m53.381500189s for next certificate rotation Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.727780 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.727813 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.727828 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.727849 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.727860 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:25Z","lastTransitionTime":"2026-01-29T08:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:25 crc kubenswrapper[4895]: E0129 08:41:25.764088 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.774465 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.776119 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.776152 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.776171 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.776209 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.776223 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:25Z","lastTransitionTime":"2026-01-29T08:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:25 crc kubenswrapper[4895]: E0129 08:41:25.815701 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: E0129 08:41:25.815860 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.818500 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829428 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-cni-netd\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829482 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovn-node-metrics-cert\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829508 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-ovn\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829535 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovnkube-config\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829558 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-etc-openvswitch\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829590 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-systemd-units\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829611 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-env-overrides\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829633 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-kubelet\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829663 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-systemd\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829690 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-node-log\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829713 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-run-netns\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829742 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829767 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-run-ovn-kubernetes\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" 
Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829789 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-cni-bin\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829832 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-slash\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829852 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-log-socket\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829874 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovnkube-script-lib\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829901 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-var-lib-openvswitch\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829940 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-openvswitch\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.829964 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnjb9\" (UniqueName: \"kubernetes.io/projected/7621f3ab-b09c-4a23-8031-645d96fe5c9b-kube-api-access-xnjb9\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.836996 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.837058 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.837078 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.837107 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.837126 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:25Z","lastTransitionTime":"2026-01-29T08:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.843554 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.864534 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.884673 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.906361 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.923796 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.930659 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovnkube-script-lib\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.930718 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-slash\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.930756 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-log-socket\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.931480 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-log-socket\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.931664 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-slash\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.931735 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnjb9\" (UniqueName: \"kubernetes.io/projected/7621f3ab-b09c-4a23-8031-645d96fe5c9b-kube-api-access-xnjb9\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.931801 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-var-lib-openvswitch\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.931891 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-openvswitch\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.932005 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovnkube-script-lib\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.932137 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-var-lib-openvswitch\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.932529 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-openvswitch\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.932626 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovn-node-metrics-cert\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.932677 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-cni-netd\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.932714 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-ovn\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.932746 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovnkube-config\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.932778 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-etc-openvswitch\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.932817 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-systemd-units\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.932870 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-env-overrides\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.932951 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-kubelet\") pod 
\"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.933006 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-systemd\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.933035 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-node-log\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.933064 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-run-netns\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.933104 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.933147 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-run-ovn-kubernetes\") pod \"ovnkube-node-x4zc4\" 
(UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.933173 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-cni-bin\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.933280 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-cni-bin\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.934533 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-node-log\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.934619 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-kubelet\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.934665 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-systemd\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: 
I0129 08:41:25.934720 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.934777 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-run-netns\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.934779 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-env-overrides\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.934824 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-run-ovn-kubernetes\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.934961 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-etc-openvswitch\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.935051 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-systemd-units\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.935101 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-cni-netd\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.935149 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-ovn\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.935583 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovnkube-config\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.938910 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovn-node-metrics-cert\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.947328 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:25 crc 
kubenswrapper[4895]: I0129 08:41:25.947374 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.947388 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.947408 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.947422 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:25Z","lastTransitionTime":"2026-01-29T08:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.950696 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.959070 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnjb9\" (UniqueName: \"kubernetes.io/projected/7621f3ab-b09c-4a23-8031-645d96fe5c9b-kube-api-access-xnjb9\") pod \"ovnkube-node-x4zc4\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.969872 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:25 crc kubenswrapper[4895]: I0129 08:41:25.988558 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:25Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.008374 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.011370 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.028348 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: W0129 08:41:26.035171 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7621f3ab_b09c_4a23_8031_645d96fe5c9b.slice/crio-886fdd8f98afea7698efa10e946e86153a52d6585d4c7ae7db6c0184cacbf33a WatchSource:0}: Error finding container 886fdd8f98afea7698efa10e946e86153a52d6585d4c7ae7db6c0184cacbf33a: Status 404 returned error can't find the container with id 886fdd8f98afea7698efa10e946e86153a52d6585d4c7ae7db6c0184cacbf33a Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.051098 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.051136 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.051150 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.051173 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.051188 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:26Z","lastTransitionTime":"2026-01-29T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.069738 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.088691 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.112746 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.154469 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.154521 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.154536 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.154558 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.154574 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:26Z","lastTransitionTime":"2026-01-29T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.172707 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:12:32.983472137 +0000 UTC Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.210521 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:26 crc kubenswrapper[4895]: E0129 08:41:26.210817 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.211158 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.211176 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:26 crc kubenswrapper[4895]: E0129 08:41:26.211605 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:26 crc kubenswrapper[4895]: E0129 08:41:26.211725 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.257935 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.257994 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.258005 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.258027 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.258039 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:26Z","lastTransitionTime":"2026-01-29T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.355485 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wfxqf" event={"ID":"da81d90f-1b31-410e-8de7-2f5d25b99a34","Type":"ContainerStarted","Data":"18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.358364 4895 generic.go:334] "Generic (PLEG): container finished" podID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerID="5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a" exitCode=0 Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.358452 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerDied","Data":"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.358799 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerStarted","Data":"886fdd8f98afea7698efa10e946e86153a52d6585d4c7ae7db6c0184cacbf33a"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.361344 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.361384 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.361395 
4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.361424 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.361442 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:26Z","lastTransitionTime":"2026-01-29T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.362522 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerStarted","Data":"15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.362575 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerStarted","Data":"fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.362594 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerStarted","Data":"1037702e349e858d98f4060ec0e9ff6d8b5ce1cffbbc7159e28ca912298745be"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.364285 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b4dgj" 
event={"ID":"69ba7dcf-e7a0-4408-983b-09a07851d01c","Type":"ContainerStarted","Data":"1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.364324 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b4dgj" event={"ID":"69ba7dcf-e7a0-4408-983b-09a07851d01c","Type":"ContainerStarted","Data":"86cb6945a1e2f778ffafb815b47c6ca0718fbd6ce7a8222e9836e1c7feb067e6"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.368525 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" event={"ID":"be953ef9-0feb-4327-ba58-0e29287bab39","Type":"ContainerStarted","Data":"cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.368598 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" event={"ID":"be953ef9-0feb-4327-ba58-0e29287bab39","Type":"ContainerStarted","Data":"33d56ecea5f82273e31898b652c197effc4ecabb4056e7140680a6f32950de3d"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.376492 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.400279 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.419595 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.439954 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.454077 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2b
b72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.463868 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.463915 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.463956 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.463972 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.463982 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:26Z","lastTransitionTime":"2026-01-29T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.470482 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.488550 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.515531 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.532074 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.549405 4895 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.574883 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.580236 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.580280 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:26 crc 
kubenswrapper[4895]: I0129 08:41:26.580292 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.580309 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.580320 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:26Z","lastTransitionTime":"2026-01-29T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.590566 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.604573 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.621089 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.636698 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.651389 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.683065 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.683116 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.683127 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.683146 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.683158 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:26Z","lastTransitionTime":"2026-01-29T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.690404 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.709636 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.726312 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.739764 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.750896 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.761689 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.780017 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.785202 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.785255 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.785266 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.785283 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.785295 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:26Z","lastTransitionTime":"2026-01-29T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.801324 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.815591 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 
08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.833738 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:26Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.887396 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.887457 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.887501 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.887521 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.887536 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:26Z","lastTransitionTime":"2026-01-29T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.993359 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.993418 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.993431 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.993459 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:26 crc kubenswrapper[4895]: I0129 08:41:26.993474 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:26Z","lastTransitionTime":"2026-01-29T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.101582 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.101627 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.101639 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.101657 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.101671 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:27Z","lastTransitionTime":"2026-01-29T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.173053 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 09:15:05.613715833 +0000 UTC Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.205080 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.205122 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.205150 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.205178 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.205190 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:27Z","lastTransitionTime":"2026-01-29T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.308298 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.308346 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.308355 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.308371 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.308382 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:27Z","lastTransitionTime":"2026-01-29T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.380362 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerStarted","Data":"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d"} Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.380410 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerStarted","Data":"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef"} Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.380421 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerStarted","Data":"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a"} Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.380429 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerStarted","Data":"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785"} Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.380438 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerStarted","Data":"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034"} Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.380447 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerStarted","Data":"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1"} Jan 29 08:41:27 crc kubenswrapper[4895]: 
I0129 08:41:27.381732 4895 generic.go:334] "Generic (PLEG): container finished" podID="be953ef9-0feb-4327-ba58-0e29287bab39" containerID="cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8" exitCode=0 Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.382252 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" event={"ID":"be953ef9-0feb-4327-ba58-0e29287bab39","Type":"ContainerDied","Data":"cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8"} Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.407953 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.411132 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.411184 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.411194 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.411213 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.411248 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:27Z","lastTransitionTime":"2026-01-29T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.429858 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f10
52dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.445045 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.460531 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T08:41:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.474692 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.488889 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-29T08:41:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.507780 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.515807 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.515859 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.515869 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.515886 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.515899 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:27Z","lastTransitionTime":"2026-01-29T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.529886 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.546200 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:27Z is after 2025-08-24T17:21:41Z" Jan 29 
08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.557301 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.569944 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.586377 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.603362 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.622429 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.622478 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.622490 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.622508 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.622521 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:27Z","lastTransitionTime":"2026-01-29T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.730024 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.730062 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.730072 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.730088 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.730101 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:27Z","lastTransitionTime":"2026-01-29T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.832825 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.832883 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.832898 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.832932 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.832948 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:27Z","lastTransitionTime":"2026-01-29T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.854338 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:27 crc kubenswrapper[4895]: E0129 08:41:27.854447 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 08:41:35.854430313 +0000 UTC m=+37.495938459 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.854614 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.854653 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:27 crc kubenswrapper[4895]: E0129 08:41:27.854750 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:41:27 crc kubenswrapper[4895]: E0129 08:41:27.854798 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-29 08:41:35.854791452 +0000 UTC m=+37.496299598 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:41:27 crc kubenswrapper[4895]: E0129 08:41:27.854873 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:41:27 crc kubenswrapper[4895]: E0129 08:41:27.855048 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:35.854990277 +0000 UTC m=+37.496498423 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.941214 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.941263 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.941271 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.941285 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.941294 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:27Z","lastTransitionTime":"2026-01-29T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.955955 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:27 crc kubenswrapper[4895]: I0129 08:41:27.956027 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:27 crc kubenswrapper[4895]: E0129 08:41:27.956182 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:41:27 crc kubenswrapper[4895]: E0129 08:41:27.956221 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:41:27 crc kubenswrapper[4895]: E0129 08:41:27.956233 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:27 crc kubenswrapper[4895]: E0129 08:41:27.956289 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:35.956271772 +0000 UTC m=+37.597779918 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:27 crc kubenswrapper[4895]: E0129 08:41:27.956631 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:41:27 crc kubenswrapper[4895]: E0129 08:41:27.956704 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:41:27 crc kubenswrapper[4895]: E0129 08:41:27.956722 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:27 crc kubenswrapper[4895]: E0129 08:41:27.956755 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:35.956746563 +0000 UTC m=+37.598254779 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.044433 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.044475 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.044484 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.044501 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.044513 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:28Z","lastTransitionTime":"2026-01-29T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.148112 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.148451 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.148522 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.148590 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.148715 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:28Z","lastTransitionTime":"2026-01-29T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.173639 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:37:46.493821367 +0000 UTC Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.211093 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.211155 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.211182 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:28 crc kubenswrapper[4895]: E0129 08:41:28.211255 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:28 crc kubenswrapper[4895]: E0129 08:41:28.211276 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:28 crc kubenswrapper[4895]: E0129 08:41:28.211330 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.252810 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.252872 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.252889 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.252934 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.252952 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:28Z","lastTransitionTime":"2026-01-29T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.356715 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.356773 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.356784 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.356809 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.356825 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:28Z","lastTransitionTime":"2026-01-29T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.387893 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" event={"ID":"be953ef9-0feb-4327-ba58-0e29287bab39","Type":"ContainerStarted","Data":"9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0"} Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.390187 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-q9lpx"] Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.390664 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-q9lpx" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.394454 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.394571 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.394607 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.396003 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.410395 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.427652 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.445213 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.460185 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwz92\" (UniqueName: \"kubernetes.io/projected/2cc592fc-c35e-4480-9cb1-2f7d122f05bd-kube-api-access-pwz92\") pod \"node-ca-q9lpx\" (UID: \"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\") " pod="openshift-image-registry/node-ca-q9lpx" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.460283 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cc592fc-c35e-4480-9cb1-2f7d122f05bd-host\") pod \"node-ca-q9lpx\" (UID: \"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\") " pod="openshift-image-registry/node-ca-q9lpx" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.460329 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2cc592fc-c35e-4480-9cb1-2f7d122f05bd-serviceca\") pod \"node-ca-q9lpx\" (UID: \"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\") " pod="openshift-image-registry/node-ca-q9lpx" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.460601 4895 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.460647 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.460660 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.460685 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.460702 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:28Z","lastTransitionTime":"2026-01-29T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.464392 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f10
52dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.479854 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.495283 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.514830 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.540866 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.557344 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 
08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.560755 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cc592fc-c35e-4480-9cb1-2f7d122f05bd-host\") pod \"node-ca-q9lpx\" (UID: \"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\") " pod="openshift-image-registry/node-ca-q9lpx" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.560792 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2cc592fc-c35e-4480-9cb1-2f7d122f05bd-serviceca\") pod \"node-ca-q9lpx\" (UID: \"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\") " pod="openshift-image-registry/node-ca-q9lpx" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.560835 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwz92\" (UniqueName: \"kubernetes.io/projected/2cc592fc-c35e-4480-9cb1-2f7d122f05bd-kube-api-access-pwz92\") pod \"node-ca-q9lpx\" (UID: \"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\") " pod="openshift-image-registry/node-ca-q9lpx" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.560887 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cc592fc-c35e-4480-9cb1-2f7d122f05bd-host\") pod \"node-ca-q9lpx\" (UID: \"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\") " pod="openshift-image-registry/node-ca-q9lpx" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.563167 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2cc592fc-c35e-4480-9cb1-2f7d122f05bd-serviceca\") pod \"node-ca-q9lpx\" (UID: \"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\") " pod="openshift-image-registry/node-ca-q9lpx" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.563404 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:28 crc 
kubenswrapper[4895]: I0129 08:41:28.563450 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.563462 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.563480 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.563812 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:28Z","lastTransitionTime":"2026-01-29T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.573632 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.585248 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwz92\" (UniqueName: \"kubernetes.io/projected/2cc592fc-c35e-4480-9cb1-2f7d122f05bd-kube-api-access-pwz92\") pod \"node-ca-q9lpx\" (UID: \"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\") " pod="openshift-image-registry/node-ca-q9lpx" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 
08:41:28.589359 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.604297 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.614637 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.629799 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.643625 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b292
58f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.662073 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.666761 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.666805 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.666817 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.666840 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 
08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.666854 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:28Z","lastTransitionTime":"2026-01-29T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.681673 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.695340 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 
08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.712796 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.727692 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.740789 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.755472 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.769633 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.769693 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.769707 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.769732 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.769753 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:28Z","lastTransitionTime":"2026-01-29T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.770765 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z 
is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.783998 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.804033 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.818234 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.820060 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-q9lpx" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.837464 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:28Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.872377 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.872425 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.872437 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.872459 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.872471 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:28Z","lastTransitionTime":"2026-01-29T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.923962 4895 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 08:41:28 crc kubenswrapper[4895]: W0129 08:41:28.924563 4895 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 29 08:41:28 crc kubenswrapper[4895]: W0129 08:41:28.925444 4895 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 29 08:41:28 crc kubenswrapper[4895]: W0129 08:41:28.926045 4895 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 29 08:41:28 crc kubenswrapper[4895]: W0129 08:41:28.926306 4895 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.977123 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.977162 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.977173 4895 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.977190 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:28 crc kubenswrapper[4895]: I0129 08:41:28.977201 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:28Z","lastTransitionTime":"2026-01-29T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.080371 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.080410 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.080418 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.080436 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.080447 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:29Z","lastTransitionTime":"2026-01-29T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.174368 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 12:28:14.830233409 +0000 UTC Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.184950 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.185025 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.185042 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.185064 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.185102 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:29Z","lastTransitionTime":"2026-01-29T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.228209 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.244906 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.261691 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.280017 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.288935 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.288964 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.288974 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.288989 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.289003 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:29Z","lastTransitionTime":"2026-01-29T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.297743 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z 
is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.312683 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.328669 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.354406 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.379712 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.391666 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.391727 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.391737 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.391761 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.391773 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:29Z","lastTransitionTime":"2026-01-29T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.395571 4895 generic.go:334] "Generic (PLEG): container finished" podID="be953ef9-0feb-4327-ba58-0e29287bab39" containerID="9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0" exitCode=0 Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.395949 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" event={"ID":"be953ef9-0feb-4327-ba58-0e29287bab39","Type":"ContainerDied","Data":"9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0"} Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.397046 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.399024 4895 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-q9lpx" event={"ID":"2cc592fc-c35e-4480-9cb1-2f7d122f05bd","Type":"ContainerStarted","Data":"863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8"} Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.399103 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-q9lpx" event={"ID":"2cc592fc-c35e-4480-9cb1-2f7d122f05bd","Type":"ContainerStarted","Data":"ed4029e50e104dfe6f31c0aa89aa0bccd1dd6fe5d5e8a83ac4d4a718c7be67d8"} Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.412288 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.427709 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.449247 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.464525 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.489553 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.496884 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.496941 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.496951 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.496970 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.496985 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:29Z","lastTransitionTime":"2026-01-29T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.504513 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.519559 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.538368 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.553290 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.570146 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.591740 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.599519 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.599562 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.599573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.599591 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.599604 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:29Z","lastTransitionTime":"2026-01-29T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.608387 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.618566 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.633422 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.648452 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.673533 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.689068 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.702562 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.702621 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.702634 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.702657 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.702672 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:29Z","lastTransitionTime":"2026-01-29T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.705128 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.805515 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.805567 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.805582 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.805605 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.805624 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:29Z","lastTransitionTime":"2026-01-29T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.857734 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.908532 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.908573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.908582 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.908598 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:29 crc kubenswrapper[4895]: I0129 08:41:29.908608 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:29Z","lastTransitionTime":"2026-01-29T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.003797 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.010997 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.011072 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.011093 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.011122 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.011149 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:30Z","lastTransitionTime":"2026-01-29T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.082510 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.115236 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.115301 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.115326 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.115353 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.115373 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:30Z","lastTransitionTime":"2026-01-29T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.175070 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:43:34.291749236 +0000 UTC Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.211204 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.211325 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:30 crc kubenswrapper[4895]: E0129 08:41:30.211374 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.211383 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:30 crc kubenswrapper[4895]: E0129 08:41:30.211486 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:30 crc kubenswrapper[4895]: E0129 08:41:30.211575 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.218311 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.218364 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.218380 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.218399 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.218411 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:30Z","lastTransitionTime":"2026-01-29T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.321852 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.321907 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.321932 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.321949 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.321960 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:30Z","lastTransitionTime":"2026-01-29T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.406277 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerStarted","Data":"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659"} Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.408485 4895 generic.go:334] "Generic (PLEG): container finished" podID="be953ef9-0feb-4327-ba58-0e29287bab39" containerID="371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200" exitCode=0 Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.408518 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" event={"ID":"be953ef9-0feb-4327-ba58-0e29287bab39","Type":"ContainerDied","Data":"371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200"} Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.424076 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.424126 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.424141 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.424161 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.424175 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:30Z","lastTransitionTime":"2026-01-29T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.425676 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.449280 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.484998 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.516416 4895 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.527160 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.527206 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.527215 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.527231 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.527245 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:30Z","lastTransitionTime":"2026-01-29T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.573736 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.598448 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08
:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.613786 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host
\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.626819 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.630863 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.630905 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.630934 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.630958 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.630972 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:30Z","lastTransitionTime":"2026-01-29T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.643817 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 
08:41:30.668718 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.684708 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:30Z is after 2025-08-24T17:21:41Z" Jan 29 
08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.699656 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.717351 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.735205 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.735247 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:30 crc 
kubenswrapper[4895]: I0129 08:41:30.735259 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.735274 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.735286 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:30Z","lastTransitionTime":"2026-01-29T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.737111 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.751475 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:30Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.837689 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.837749 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.837767 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.837793 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.837810 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:30Z","lastTransitionTime":"2026-01-29T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.940887 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.940968 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.940983 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.941008 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:30 crc kubenswrapper[4895]: I0129 08:41:30.941022 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:30Z","lastTransitionTime":"2026-01-29T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.043877 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.043972 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.043988 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.044019 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.044033 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:31Z","lastTransitionTime":"2026-01-29T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.147864 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.147956 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.147968 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.147988 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.147999 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:31Z","lastTransitionTime":"2026-01-29T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.175997 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:55:28.063427177 +0000 UTC Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.250578 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.250643 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.250656 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.250674 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.250689 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:31Z","lastTransitionTime":"2026-01-29T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.354029 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.354074 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.354084 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.354098 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.354110 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:31Z","lastTransitionTime":"2026-01-29T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.414252 4895 generic.go:334] "Generic (PLEG): container finished" podID="be953ef9-0feb-4327-ba58-0e29287bab39" containerID="0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1" exitCode=0 Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.414337 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" event={"ID":"be953ef9-0feb-4327-ba58-0e29287bab39","Type":"ContainerDied","Data":"0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1"} Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.429629 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:31Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.446217 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:31Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.457472 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.457537 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.457550 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.457571 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.457585 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:31Z","lastTransitionTime":"2026-01-29T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.461754 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:31Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.474175 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:31Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.489343 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:31Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.502984 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:31Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.518114 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:31Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.535440 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:31Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.560126 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.560161 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.560170 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.560185 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.560197 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:31Z","lastTransitionTime":"2026-01-29T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.566080 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:31Z 
is after 2025-08-24T17:21:41Z" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.580070 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:31Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.593826 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c435
5b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTi
me\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:31Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.613940 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:31Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.641775 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:31Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.661724 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:31Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.662870 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.662946 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.662958 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.662976 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.662988 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:31Z","lastTransitionTime":"2026-01-29T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.765413 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.765461 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.765474 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.765494 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.765507 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:31Z","lastTransitionTime":"2026-01-29T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.868246 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.868610 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.868619 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.868634 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.868644 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:31Z","lastTransitionTime":"2026-01-29T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.972685 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.972733 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.972743 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.972759 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:31 crc kubenswrapper[4895]: I0129 08:41:31.972769 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:31Z","lastTransitionTime":"2026-01-29T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.075527 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.075589 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.075602 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.075625 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.075656 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:32Z","lastTransitionTime":"2026-01-29T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.176258 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:56:20.596000799 +0000 UTC Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.179498 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.179538 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.179552 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.179572 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.179583 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:32Z","lastTransitionTime":"2026-01-29T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.210452 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.210532 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:32 crc kubenswrapper[4895]: E0129 08:41:32.210613 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:32 crc kubenswrapper[4895]: E0129 08:41:32.210734 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.210811 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:32 crc kubenswrapper[4895]: E0129 08:41:32.210866 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.283022 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.283072 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.283082 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.283101 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.283112 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:32Z","lastTransitionTime":"2026-01-29T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.389758 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.389808 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.389823 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.389847 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.389870 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:32Z","lastTransitionTime":"2026-01-29T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.424151 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerStarted","Data":"25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c"} Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.424250 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.424271 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.430222 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" event={"ID":"be953ef9-0feb-4327-ba58-0e29287bab39","Type":"ContainerStarted","Data":"d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545"} Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.441389 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.480457 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.480452 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.481087 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.495471 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.496021 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.496056 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.496065 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.496089 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.496105 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:32Z","lastTransitionTime":"2026-01-29T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.513884 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.529902 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.553349 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.573106 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.588004 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc3
67b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.598227 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.598288 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.598304 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:32 crc 
kubenswrapper[4895]: I0129 08:41:32.598324 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.598338 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:32Z","lastTransitionTime":"2026-01-29T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.608858 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f227
32df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.632859 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 
08:41:32.648604 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.662522 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.675295 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.685385 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.699045 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.700811 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.700861 4895 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.700873 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.700895 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.700909 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:32Z","lastTransitionTime":"2026-01-29T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.715153 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.731899 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.745661 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.763398 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.776728 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.788735 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.793824 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd
15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.804352 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.804409 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.804421 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.804441 4895 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.804454 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:32Z","lastTransitionTime":"2026-01-29T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.812385 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.835182 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.853664 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.872811 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.896190 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.907021 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.907060 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.907076 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.907097 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.907112 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:32Z","lastTransitionTime":"2026-01-29T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.918577 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rk
k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.945570 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.959431 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:32 crc kubenswrapper[4895]: I0129 08:41:32.982518 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.007324 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:32Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.009485 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.009541 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.009555 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.009574 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.009853 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:33Z","lastTransitionTime":"2026-01-29T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.024165 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:33Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.040000 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:33Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.059610 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:33Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.077562 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T08:41:33Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.091415 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:33Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.109650 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f227
32df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:33Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.112324 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.112365 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.112376 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.112392 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.112404 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:33Z","lastTransitionTime":"2026-01-29T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.131063 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:33Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.146749 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:33Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.161270 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc3
67b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:33Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.174966 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:33Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.176930 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 19:34:08.382688636 +0000 UTC Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.187710 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:33Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.214540 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.214587 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.214600 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.214619 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.214630 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:33Z","lastTransitionTime":"2026-01-29T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.317229 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.317275 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.317287 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.317306 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.317320 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:33Z","lastTransitionTime":"2026-01-29T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.421004 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.421060 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.421072 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.421097 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.421111 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:33Z","lastTransitionTime":"2026-01-29T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.433438 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.524980 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.525060 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.525085 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.525114 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.525131 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:33Z","lastTransitionTime":"2026-01-29T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.628509 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.628580 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.628593 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.628615 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.628628 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:33Z","lastTransitionTime":"2026-01-29T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.731611 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.731653 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.731663 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.731686 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.731699 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:33Z","lastTransitionTime":"2026-01-29T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.836296 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.836702 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.836717 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.836741 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.836755 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:33Z","lastTransitionTime":"2026-01-29T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.939738 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.939779 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.939790 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.939809 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:33 crc kubenswrapper[4895]: I0129 08:41:33.939821 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:33Z","lastTransitionTime":"2026-01-29T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.042682 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.042724 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.042735 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.042752 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.042766 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:34Z","lastTransitionTime":"2026-01-29T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.146142 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.146194 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.146209 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.146230 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.146248 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:34Z","lastTransitionTime":"2026-01-29T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.177223 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 22:20:17.166251496 +0000 UTC Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.210676 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:34 crc kubenswrapper[4895]: E0129 08:41:34.210864 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.211352 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:34 crc kubenswrapper[4895]: E0129 08:41:34.211570 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.211660 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:34 crc kubenswrapper[4895]: E0129 08:41:34.211884 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.248623 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.248693 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.248704 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.248721 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.248735 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:34Z","lastTransitionTime":"2026-01-29T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.351079 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.351120 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.351129 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.351145 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.351155 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:34Z","lastTransitionTime":"2026-01-29T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.441378 4895 generic.go:334] "Generic (PLEG): container finished" podID="be953ef9-0feb-4327-ba58-0e29287bab39" containerID="d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545" exitCode=0 Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.441461 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" event={"ID":"be953ef9-0feb-4327-ba58-0e29287bab39","Type":"ContainerDied","Data":"d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545"} Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.441584 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.454458 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.454507 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.454520 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.454541 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.454556 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:34Z","lastTransitionTime":"2026-01-29T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.460861 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.476864 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T08:41:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.493263 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.514364 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.535519 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.550434 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.558263 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 
08:41:34.558312 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.558325 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.558347 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.558362 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:34Z","lastTransitionTime":"2026-01-29T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.565137 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.579406 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.595601 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.609746 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.624341 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb48
7713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.637480 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f
46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.653666 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a117584
98a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.660732 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.660782 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.660795 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.660814 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.660825 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:34Z","lastTransitionTime":"2026-01-29T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.670603 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f10
52dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:34Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.764410 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.764461 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.764474 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.764493 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.764508 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:34Z","lastTransitionTime":"2026-01-29T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.867102 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.867146 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.867157 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.867172 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.867185 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:34Z","lastTransitionTime":"2026-01-29T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.970339 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.970385 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.970395 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.970412 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:34 crc kubenswrapper[4895]: I0129 08:41:34.970422 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:34Z","lastTransitionTime":"2026-01-29T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.073111 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.073150 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.073160 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.073178 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.073188 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:35Z","lastTransitionTime":"2026-01-29T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.176833 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.176901 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.176948 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.176976 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.176993 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:35Z","lastTransitionTime":"2026-01-29T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.178041 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 10:31:57.214434223 +0000 UTC Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.279688 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.280156 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.280246 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.280326 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.280399 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:35Z","lastTransitionTime":"2026-01-29T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.383519 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.383563 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.383573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.383589 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.383600 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:35Z","lastTransitionTime":"2026-01-29T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.450670 4895 generic.go:334] "Generic (PLEG): container finished" podID="be953ef9-0feb-4327-ba58-0e29287bab39" containerID="220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749" exitCode=0 Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.450747 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" event={"ID":"be953ef9-0feb-4327-ba58-0e29287bab39","Type":"ContainerDied","Data":"220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749"} Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.468257 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] 
Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.487252 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.487735 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.487749 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.487771 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.487789 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:35Z","lastTransitionTime":"2026-01-29T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.496384 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.512817 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.531313 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T08:41:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.544428 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.557440 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc3
67b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.578751 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.590295 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.590337 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.590351 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.590368 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.590381 4895 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:35Z","lastTransitionTime":"2026-01-29T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.598950 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.611796 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.626949 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.644230 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.658487 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.672511 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.687488 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:35Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.694181 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.694229 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.694240 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.694261 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.694275 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:35Z","lastTransitionTime":"2026-01-29T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.797564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.797621 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.797638 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.797660 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.797675 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:35Z","lastTransitionTime":"2026-01-29T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.861138 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.861375 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:35 crc kubenswrapper[4895]: E0129 08:41:35.861464 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:41:51.861401348 +0000 UTC m=+53.502909504 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:35 crc kubenswrapper[4895]: E0129 08:41:35.861550 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:41:35 crc kubenswrapper[4895]: E0129 08:41:35.861645 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:51.861619242 +0000 UTC m=+53.503127388 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.861677 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:35 crc kubenswrapper[4895]: E0129 08:41:35.861796 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:41:35 crc kubenswrapper[4895]: E0129 08:41:35.861883 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:51.861870658 +0000 UTC m=+53.503378804 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.901496 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.901546 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.901556 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.901574 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.901585 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:35Z","lastTransitionTime":"2026-01-29T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.963828 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:35 crc kubenswrapper[4895]: E0129 08:41:35.964048 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:41:35 crc kubenswrapper[4895]: E0129 08:41:35.964089 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:41:35 crc kubenswrapper[4895]: E0129 08:41:35.964103 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:35 crc kubenswrapper[4895]: I0129 08:41:35.964115 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:35 crc kubenswrapper[4895]: E0129 08:41:35.964176 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:51.964149948 +0000 UTC m=+53.605658194 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:35 crc kubenswrapper[4895]: E0129 08:41:35.964436 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:41:35 crc kubenswrapper[4895]: E0129 08:41:35.964480 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:41:35 crc kubenswrapper[4895]: E0129 08:41:35.964501 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:35 crc kubenswrapper[4895]: E0129 08:41:35.964589 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 08:41:51.964562917 +0000 UTC m=+53.606071063 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.004510 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.004558 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.004572 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.004592 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.004604 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:36Z","lastTransitionTime":"2026-01-29T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.107640 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.107680 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.107692 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.107721 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.107740 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:36Z","lastTransitionTime":"2026-01-29T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.164645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.164709 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.164721 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.164747 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.164760 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:36Z","lastTransitionTime":"2026-01-29T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.178578 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 13:57:16.56332666 +0000 UTC Jan 29 08:41:36 crc kubenswrapper[4895]: E0129 08:41:36.181986 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",
\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.186668 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.186724 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.186736 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.186763 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.186779 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:36Z","lastTransitionTime":"2026-01-29T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:36 crc kubenswrapper[4895]: E0129 08:41:36.202198 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.206812 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.206854 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.206865 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.206887 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.206901 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:36Z","lastTransitionTime":"2026-01-29T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.210993 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.211060 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.211097 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:36 crc kubenswrapper[4895]: E0129 08:41:36.211212 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:36 crc kubenswrapper[4895]: E0129 08:41:36.211314 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:36 crc kubenswrapper[4895]: E0129 08:41:36.211413 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:36 crc kubenswrapper[4895]: E0129 08:41:36.220344 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.231041 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.231092 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.231105 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.231126 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.231141 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:36Z","lastTransitionTime":"2026-01-29T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:36 crc kubenswrapper[4895]: E0129 08:41:36.260255 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.267084 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.267134 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.267147 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.267166 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.267176 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:36Z","lastTransitionTime":"2026-01-29T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:36 crc kubenswrapper[4895]: E0129 08:41:36.300385 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: E0129 08:41:36.300880 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.303245 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.303301 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.303314 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.303340 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.303353 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:36Z","lastTransitionTime":"2026-01-29T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.406412 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.406450 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.406463 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.406480 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.406495 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:36Z","lastTransitionTime":"2026-01-29T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.459089 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" event={"ID":"be953ef9-0feb-4327-ba58-0e29287bab39","Type":"ContainerStarted","Data":"1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02"} Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.478678 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905
335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.496948 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.509199 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.509262 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.509275 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.509297 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.509310 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:36Z","lastTransitionTime":"2026-01-29T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.517801 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.538318 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.549485 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.561441 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.575403 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.590902 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177
faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.610789 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.612020 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.612057 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.612069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.612086 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.612099 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:36Z","lastTransitionTime":"2026-01-29T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.637750 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.652565 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.667642 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.681471 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.692314 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:36Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.716547 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.716618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.716631 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.716650 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.716663 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:36Z","lastTransitionTime":"2026-01-29T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.819111 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.819151 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.819164 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.819180 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.819192 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:36Z","lastTransitionTime":"2026-01-29T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.922232 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.922310 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.922320 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.922336 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:36 crc kubenswrapper[4895]: I0129 08:41:36.922350 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:36Z","lastTransitionTime":"2026-01-29T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.024680 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.024731 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.024744 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.024764 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.024777 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:37Z","lastTransitionTime":"2026-01-29T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.128088 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.128138 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.128151 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.128171 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.128181 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:37Z","lastTransitionTime":"2026-01-29T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.179066 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 09:51:58.907476906 +0000 UTC Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.231054 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.231105 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.231117 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.231161 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.231174 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:37Z","lastTransitionTime":"2026-01-29T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.334785 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.334827 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.334839 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.334857 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.334869 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:37Z","lastTransitionTime":"2026-01-29T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.437694 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.437766 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.437788 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.437817 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.437835 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:37Z","lastTransitionTime":"2026-01-29T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.540430 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.540484 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.540498 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.540517 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.540531 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:37Z","lastTransitionTime":"2026-01-29T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.644260 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.644326 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.644336 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.644352 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.644363 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:37Z","lastTransitionTime":"2026-01-29T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.746531 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.746629 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.746658 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.746708 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.746766 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:37Z","lastTransitionTime":"2026-01-29T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.849127 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.849184 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.849200 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.849221 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.849236 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:37Z","lastTransitionTime":"2026-01-29T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.952594 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.952642 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.952654 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.952671 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:37 crc kubenswrapper[4895]: I0129 08:41:37.952684 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:37Z","lastTransitionTime":"2026-01-29T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.055206 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.055255 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.055267 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.055285 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.055298 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:38Z","lastTransitionTime":"2026-01-29T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.158175 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.158234 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.158247 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.158299 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.158315 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:38Z","lastTransitionTime":"2026-01-29T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.179809 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:36:19.227895956 +0000 UTC Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.210681 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.210785 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.210859 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:38 crc kubenswrapper[4895]: E0129 08:41:38.211066 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:38 crc kubenswrapper[4895]: E0129 08:41:38.211156 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:38 crc kubenswrapper[4895]: E0129 08:41:38.211243 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.261066 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.261132 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.261146 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.261167 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.261431 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:38Z","lastTransitionTime":"2026-01-29T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.364783 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.364845 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.364857 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.364876 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.364890 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:38Z","lastTransitionTime":"2026-01-29T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.468059 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.468114 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.468135 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.468159 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.468174 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:38Z","lastTransitionTime":"2026-01-29T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.469847 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovnkube-controller/0.log" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.474414 4895 generic.go:334] "Generic (PLEG): container finished" podID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerID="25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c" exitCode=1 Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.474464 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerDied","Data":"25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c"} Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.475520 4895 scope.go:117] "RemoveContainer" containerID="25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.493371 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.512519 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.533030 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.555886 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.571016 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb48
7713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.572252 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.572298 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.572326 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.572345 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.572356 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:38Z","lastTransitionTime":"2026-01-29T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.583809 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.588912 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk"] Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.589842 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.591588 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.591757 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.600144 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}]
,\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.616092 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.627972 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.639050 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.653683 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177
faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.672622 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:37.256935 6094 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:37.257094 6094 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257191 6094 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257251 6094 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257329 6094 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:37.257878 6094 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 08:41:37.257963 6094 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 08:41:37.258045 6094 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 08:41:37.258105 6094 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 08:41:37.258055 6094 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.674370 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.674394 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.674408 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.674428 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.674440 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:38Z","lastTransitionTime":"2026-01-29T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.685914 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.697607 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.700099 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd357565-d91f-44af-bc41-befbeb672385-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-w8kmk\" (UID: \"cd357565-d91f-44af-bc41-befbeb672385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.700174 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd357565-d91f-44af-bc41-befbeb672385-env-overrides\") pod \"ovnkube-control-plane-749d76644c-w8kmk\" (UID: \"cd357565-d91f-44af-bc41-befbeb672385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" Jan 29 08:41:38 crc 
kubenswrapper[4895]: I0129 08:41:38.700201 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gltqd\" (UniqueName: \"kubernetes.io/projected/cd357565-d91f-44af-bc41-befbeb672385-kube-api-access-gltqd\") pod \"ovnkube-control-plane-749d76644c-w8kmk\" (UID: \"cd357565-d91f-44af-bc41-befbeb672385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.700243 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd357565-d91f-44af-bc41-befbeb672385-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-w8kmk\" (UID: \"cd357565-d91f-44af-bc41-befbeb672385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.712388 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.731770 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.746447 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.759280 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.776448 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.776494 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.776507 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.776526 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.776540 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:38Z","lastTransitionTime":"2026-01-29T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.779429 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.796840 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:37.256935 6094 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:37.257094 6094 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257191 6094 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257251 6094 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257329 6094 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:37.257878 6094 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 08:41:37.257963 6094 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 08:41:37.258045 6094 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 08:41:37.258105 6094 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 08:41:37.258055 6094 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.800808 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd357565-d91f-44af-bc41-befbeb672385-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-w8kmk\" (UID: \"cd357565-d91f-44af-bc41-befbeb672385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.800855 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd357565-d91f-44af-bc41-befbeb672385-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-w8kmk\" (UID: \"cd357565-d91f-44af-bc41-befbeb672385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.800909 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd357565-d91f-44af-bc41-befbeb672385-env-overrides\") pod \"ovnkube-control-plane-749d76644c-w8kmk\" (UID: \"cd357565-d91f-44af-bc41-befbeb672385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.800962 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gltqd\" (UniqueName: \"kubernetes.io/projected/cd357565-d91f-44af-bc41-befbeb672385-kube-api-access-gltqd\") pod \"ovnkube-control-plane-749d76644c-w8kmk\" (UID: \"cd357565-d91f-44af-bc41-befbeb672385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.801901 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd357565-d91f-44af-bc41-befbeb672385-env-overrides\") pod \"ovnkube-control-plane-749d76644c-w8kmk\" (UID: \"cd357565-d91f-44af-bc41-befbeb672385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.802201 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd357565-d91f-44af-bc41-befbeb672385-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-w8kmk\" (UID: \"cd357565-d91f-44af-bc41-befbeb672385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.807544 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd357565-d91f-44af-bc41-befbeb672385-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-w8kmk\" (UID: \"cd357565-d91f-44af-bc41-befbeb672385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" 
Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.808699 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.817829 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gltqd\" (UniqueName: \"kubernetes.io/projected/cd357565-d91f-44af-bc41-befbeb672385-kube-api-access-gltqd\") pod \"ovnkube-control-plane-749d76644c-w8kmk\" (UID: \"cd357565-d91f-44af-bc41-befbeb672385\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.824797 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.835438 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc3
67b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.848811 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177
faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.861054 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.874163 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.880831 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.880885 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.880897 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.880940 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.880953 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:38Z","lastTransitionTime":"2026-01-29T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.889190 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.902300 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.902761 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.915163 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:38Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:38 crc kubenswrapper[4895]: W0129 08:41:38.915797 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd357565_d91f_44af_bc41_befbeb672385.slice/crio-e485099def86f4063dadefe7b5331ae765f36bee374fc5bdccb99fa7391ca7c7 WatchSource:0}: Error finding container e485099def86f4063dadefe7b5331ae765f36bee374fc5bdccb99fa7391ca7c7: Status 404 returned error can't find the container with id e485099def86f4063dadefe7b5331ae765f36bee374fc5bdccb99fa7391ca7c7 Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.983763 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.983806 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.983818 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:38 crc kubenswrapper[4895]: I0129 08:41:38.983837 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:38 crc 
kubenswrapper[4895]: I0129 08:41:38.983850 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:38Z","lastTransitionTime":"2026-01-29T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.087011 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.087051 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.087060 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.087078 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.087089 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:39Z","lastTransitionTime":"2026-01-29T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.180089 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 03:44:06.515682507 +0000 UTC Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.189832 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.189870 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.189880 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.189895 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.189908 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:39Z","lastTransitionTime":"2026-01-29T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.227544 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.241103 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.256233 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.276140 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.291460 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.291787 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.291808 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.291820 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.291840 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.291853 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:39Z","lastTransitionTime":"2026-01-29T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.307632 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.320459 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08
:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.333692 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.348658 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a117584
98a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.362318 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.379439 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.391368 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.394790 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.394832 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.394846 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.394865 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.394877 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:39Z","lastTransitionTime":"2026-01-29T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.404613 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.424271 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177
faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.445798 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:37.256935 6094 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:37.257094 6094 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257191 6094 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257251 6094 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257329 6094 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:37.257878 6094 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 08:41:37.257963 6094 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 08:41:37.258045 6094 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 08:41:37.258105 6094 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 08:41:37.258055 6094 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.480984 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovnkube-controller/0.log" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.485502 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerStarted","Data":"67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67"} Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.485703 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.487367 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" 
event={"ID":"cd357565-d91f-44af-bc41-befbeb672385","Type":"ContainerStarted","Data":"ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4"} Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.487518 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" event={"ID":"cd357565-d91f-44af-bc41-befbeb672385","Type":"ContainerStarted","Data":"e485099def86f4063dadefe7b5331ae765f36bee374fc5bdccb99fa7391ca7c7"} Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.498372 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.498423 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.498433 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.498452 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.498465 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:39Z","lastTransitionTime":"2026-01-29T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.504588 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f10
52dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.521553 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.540934 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.551739 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.563390 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a117584
98a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.587363 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\") from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:37.256935 6094 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:37.257094 6094 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257191 6094 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257251 6094 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257329 6094 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:37.257878 6094 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 08:41:37.257963 6094 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 08:41:37.258045 6094 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 08:41:37.258105 6094 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 08:41:37.258055 6094 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.601127 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.601478 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.601587 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.601670 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.601733 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:39Z","lastTransitionTime":"2026-01-29T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.602591 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.617072 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.634263 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.650140 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.667253 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.671666 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-g4585"] Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.672344 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:39 crc kubenswrapper[4895]: E0129 08:41:39.672489 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.683020 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.697821 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.704148 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.704190 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.704200 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.704216 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.704227 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:39Z","lastTransitionTime":"2026-01-29T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.714117 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.725586 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.747698 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.765420 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.781006 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a117584
98a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.794209 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.806607 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.806652 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.806663 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.806680 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.806690 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:39Z","lastTransitionTime":"2026-01-29T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.809016 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.813888 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8cxd\" (UniqueName: \"kubernetes.io/projected/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-kube-api-access-v8cxd\") pod \"network-metrics-daemon-g4585\" (UID: \"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\") " pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.814109 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs\") pod \"network-metrics-daemon-g4585\" (UID: \"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\") " pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.823431 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.838399 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.862452 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177
faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.885358 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:37.256935 6094 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 
08:41:37.257094 6094 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257191 6094 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257251 6094 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257329 6094 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:37.257878 6094 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 08:41:37.257963 6094 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 08:41:37.258045 6094 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 08:41:37.258105 6094 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 08:41:37.258055 6094 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.900328 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.908537 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.908751 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.908890 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.908995 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.909083 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:39Z","lastTransitionTime":"2026-01-29T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.915827 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.915933 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs\") pod \"network-metrics-daemon-g4585\" (UID: \"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\") " pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.916350 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8cxd\" (UniqueName: \"kubernetes.io/projected/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-kube-api-access-v8cxd\") pod 
\"network-metrics-daemon-g4585\" (UID: \"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\") " pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:39 crc kubenswrapper[4895]: E0129 08:41:39.916068 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:41:39 crc kubenswrapper[4895]: E0129 08:41:39.916550 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs podName:d167bf78-4ea9-42d8-8ab6-6aaf234e102e nodeName:}" failed. No retries permitted until 2026-01-29 08:41:40.4165052 +0000 UTC m=+42.058013506 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs") pod "network-metrics-daemon-g4585" (UID: "d167bf78-4ea9-42d8-8ab6-6aaf234e102e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.933870 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.937011 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8cxd\" (UniqueName: \"kubernetes.io/projected/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-kube-api-access-v8cxd\") pod \"network-metrics-daemon-g4585\" (UID: \"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\") " pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:39 crc 
kubenswrapper[4895]: I0129 08:41:39.960817 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:39 crc kubenswrapper[4895]: I0129 08:41:39.989519 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:39Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.003930 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.011988 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.012024 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.012033 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.012049 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.012058 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:40Z","lastTransitionTime":"2026-01-29T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.018997 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc 
kubenswrapper[4895]: I0129 08:41:40.115219 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.115272 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.115282 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.115299 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.115310 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:40Z","lastTransitionTime":"2026-01-29T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.180335 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:09:23.691191646 +0000 UTC Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.210892 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.210945 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.210885 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:40 crc kubenswrapper[4895]: E0129 08:41:40.211062 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:40 crc kubenswrapper[4895]: E0129 08:41:40.211109 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:40 crc kubenswrapper[4895]: E0129 08:41:40.211169 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.217809 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.218312 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.218322 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.218339 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.218352 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:40Z","lastTransitionTime":"2026-01-29T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.321573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.321627 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.321643 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.321667 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.321683 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:40Z","lastTransitionTime":"2026-01-29T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.420638 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs\") pod \"network-metrics-daemon-g4585\" (UID: \"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\") " pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:40 crc kubenswrapper[4895]: E0129 08:41:40.420864 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:41:40 crc kubenswrapper[4895]: E0129 08:41:40.421003 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs podName:d167bf78-4ea9-42d8-8ab6-6aaf234e102e nodeName:}" failed. No retries permitted until 2026-01-29 08:41:41.420976549 +0000 UTC m=+43.062484735 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs") pod "network-metrics-daemon-g4585" (UID: "d167bf78-4ea9-42d8-8ab6-6aaf234e102e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.424419 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.424472 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.424486 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.424507 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.424523 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:40Z","lastTransitionTime":"2026-01-29T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.492583 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovnkube-controller/1.log" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.493579 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovnkube-controller/0.log" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.496561 4895 generic.go:334] "Generic (PLEG): container finished" podID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerID="67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67" exitCode=1 Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.496691 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerDied","Data":"67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67"} Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.496821 4895 scope.go:117] "RemoveContainer" containerID="25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.498218 4895 scope.go:117] "RemoveContainer" containerID="67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67" Jan 29 08:41:40 crc kubenswrapper[4895]: E0129 08:41:40.498586 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.499781 4895 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" event={"ID":"cd357565-d91f-44af-bc41-befbeb672385","Type":"ContainerStarted","Data":"ee62f73cca5e2bfdd470a4588d7e049a51be64fae5112e303145bc00c01ad5cf"} Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.521109 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.529302 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.529357 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.529369 4895 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.529390 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.529404 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:40Z","lastTransitionTime":"2026-01-29T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.538141 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.553202 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.567180 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.578850 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.592579 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc 
kubenswrapper[4895]: I0129 08:41:40.605137 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p
wz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.620651 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set 
denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.631334 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.631390 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.631403 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.631429 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.631445 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:40Z","lastTransitionTime":"2026-01-29T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.635088 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f10
52dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.649452 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.664340 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.678228 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.691086 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.709016 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.738106 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.738143 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.738152 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.738171 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.738183 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:40Z","lastTransitionTime":"2026-01-29T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.738923 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:37.256935 6094 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 
08:41:37.257094 6094 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257191 6094 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257251 6094 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257329 6094 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:37.257878 6094 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 08:41:37.257963 6094 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 08:41:37.258045 6094 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 08:41:37.258105 6094 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 08:41:37.258055 6094 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"message\\\":\\\"12 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 08:41:40.149840 6312 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:40.150169 6312 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:40.150472 6312 factory.go:656] Stopping watch factory\\\\nI0129 08:41:40.150503 
6312 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:40.150822 6312 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:40.176244 6312 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 08:41:40.176298 6312 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 08:41:40.176415 6312 ovnkube.go:599] Stopped ovnkube\\\\nI0129 08:41:40.176477 6312 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 08:41:40.176639 6312 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"
/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.761064 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.802629 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.816530 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a117584
98a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.829962 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.840960 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.841001 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.841012 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.841028 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.841037 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:40Z","lastTransitionTime":"2026-01-29T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.843839 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.855817 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.867655 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-al
erter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.878467 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a
27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" 
for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.890835 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb08
5a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.909300 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\") from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:37.256935 6094 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:37.257094 6094 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257191 6094 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257251 6094 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257329 6094 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:37.257878 6094 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 08:41:37.257963 6094 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 08:41:37.258045 6094 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 08:41:37.258105 6094 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 08:41:37.258055 6094 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"message\\\":\\\"12 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 08:41:40.149840 6312 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:40.150169 6312 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:40.150472 6312 factory.go:656] Stopping watch factory\\\\nI0129 08:41:40.150503 6312 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:40.150822 6312 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:40.176244 6312 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 08:41:40.176298 6312 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 08:41:40.176415 6312 ovnkube.go:599] Stopped ovnkube\\\\nI0129 08:41:40.176477 6312 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 08:41:40.176639 6312 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.921103 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee62f73cca5e2bfdd470a4588d7e049a51be6
4fae5112e303145bc00c01ad5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.934306 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.943475 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.943523 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.943536 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.943555 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.943567 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:40Z","lastTransitionTime":"2026-01-29T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.947443 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.960039 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.980094 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:40 crc kubenswrapper[4895]: I0129 08:41:40.992045 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:40Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.003648 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:41Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:41 crc 
kubenswrapper[4895]: I0129 08:41:41.046400 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.046447 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.046456 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.046477 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.046488 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:41Z","lastTransitionTime":"2026-01-29T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.149571 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.149627 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.149639 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.149661 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.149673 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:41Z","lastTransitionTime":"2026-01-29T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.181035 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 14:26:45.456094321 +0000 UTC Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.211269 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:41 crc kubenswrapper[4895]: E0129 08:41:41.211440 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.252196 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.252245 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.252255 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.252279 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.252293 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:41Z","lastTransitionTime":"2026-01-29T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.355160 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.355220 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.355232 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.355252 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.355265 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:41Z","lastTransitionTime":"2026-01-29T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.431067 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs\") pod \"network-metrics-daemon-g4585\" (UID: \"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\") " pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:41 crc kubenswrapper[4895]: E0129 08:41:41.431217 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:41:41 crc kubenswrapper[4895]: E0129 08:41:41.431288 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs podName:d167bf78-4ea9-42d8-8ab6-6aaf234e102e nodeName:}" failed. No retries permitted until 2026-01-29 08:41:43.431269961 +0000 UTC m=+45.072778107 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs") pod "network-metrics-daemon-g4585" (UID: "d167bf78-4ea9-42d8-8ab6-6aaf234e102e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.458049 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.458113 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.458130 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.458151 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.458164 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:41Z","lastTransitionTime":"2026-01-29T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.505389 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovnkube-controller/1.log" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.560902 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.561015 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.561031 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.561054 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.561069 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:41Z","lastTransitionTime":"2026-01-29T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.663683 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.663747 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.663759 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.663786 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.663799 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:41Z","lastTransitionTime":"2026-01-29T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.766720 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.766788 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.766802 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.766823 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.766836 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:41Z","lastTransitionTime":"2026-01-29T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.869491 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.869569 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.869600 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.869635 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.869658 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:41Z","lastTransitionTime":"2026-01-29T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.972236 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.972377 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.972403 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.972433 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:41 crc kubenswrapper[4895]: I0129 08:41:41.972456 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:41Z","lastTransitionTime":"2026-01-29T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.075160 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.075229 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.075243 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.075260 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.075270 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:42Z","lastTransitionTime":"2026-01-29T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.181190 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:07:30.835538361 +0000 UTC Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.181565 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.181618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.181633 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.181657 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.181674 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:42Z","lastTransitionTime":"2026-01-29T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.211080 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.211127 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.211164 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:42 crc kubenswrapper[4895]: E0129 08:41:42.211274 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:42 crc kubenswrapper[4895]: E0129 08:41:42.211387 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:42 crc kubenswrapper[4895]: E0129 08:41:42.211494 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.284558 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.284598 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.284606 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.284628 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.284639 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:42Z","lastTransitionTime":"2026-01-29T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.387644 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.387711 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.387733 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.387763 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.387786 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:42Z","lastTransitionTime":"2026-01-29T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.491361 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.491419 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.491437 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.491459 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.491476 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:42Z","lastTransitionTime":"2026-01-29T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.593995 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.594073 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.594098 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.594131 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.594154 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:42Z","lastTransitionTime":"2026-01-29T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.698838 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.698901 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.698951 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.698980 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.698996 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:42Z","lastTransitionTime":"2026-01-29T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.801228 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.801281 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.801301 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.801320 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.801332 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:42Z","lastTransitionTime":"2026-01-29T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.903876 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.903947 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.903960 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.903987 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:42 crc kubenswrapper[4895]: I0129 08:41:42.904001 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:42Z","lastTransitionTime":"2026-01-29T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.006892 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.007311 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.007398 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.007492 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.007595 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:43Z","lastTransitionTime":"2026-01-29T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.110650 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.110699 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.110711 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.110731 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.110748 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:43Z","lastTransitionTime":"2026-01-29T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.181450 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 20:36:02.152107365 +0000 UTC Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.211454 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:43 crc kubenswrapper[4895]: E0129 08:41:43.212464 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.214077 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.214145 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.214164 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.214190 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.214205 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:43Z","lastTransitionTime":"2026-01-29T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.317208 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.317239 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.317249 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.317266 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.317277 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:43Z","lastTransitionTime":"2026-01-29T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.420110 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.420171 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.420186 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.420208 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.420222 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:43Z","lastTransitionTime":"2026-01-29T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.453449 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs\") pod \"network-metrics-daemon-g4585\" (UID: \"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\") " pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:43 crc kubenswrapper[4895]: E0129 08:41:43.453760 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:41:43 crc kubenswrapper[4895]: E0129 08:41:43.453885 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs podName:d167bf78-4ea9-42d8-8ab6-6aaf234e102e nodeName:}" failed. No retries permitted until 2026-01-29 08:41:47.453861808 +0000 UTC m=+49.095369944 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs") pod "network-metrics-daemon-g4585" (UID: "d167bf78-4ea9-42d8-8ab6-6aaf234e102e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.522707 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.522749 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.522759 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.522775 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.522784 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:43Z","lastTransitionTime":"2026-01-29T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.625725 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.625770 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.625781 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.625796 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.625805 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:43Z","lastTransitionTime":"2026-01-29T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.728478 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.728519 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.728531 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.728552 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.728564 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:43Z","lastTransitionTime":"2026-01-29T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.831574 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.831642 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.831660 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.831708 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.831725 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:43Z","lastTransitionTime":"2026-01-29T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.935153 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.935498 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.935570 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.935674 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:43 crc kubenswrapper[4895]: I0129 08:41:43.935752 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:43Z","lastTransitionTime":"2026-01-29T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.039366 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.039420 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.039434 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.039638 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.039652 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:44Z","lastTransitionTime":"2026-01-29T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.142449 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.142500 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.142511 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.142532 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.142543 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:44Z","lastTransitionTime":"2026-01-29T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.182551 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:30:09.453262951 +0000 UTC Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.210234 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.210272 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:44 crc kubenswrapper[4895]: E0129 08:41:44.210548 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:44 crc kubenswrapper[4895]: E0129 08:41:44.210603 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.210272 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:44 crc kubenswrapper[4895]: E0129 08:41:44.211137 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.245544 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.245596 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.245605 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.245623 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.245634 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:44Z","lastTransitionTime":"2026-01-29T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.348786 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.349186 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.349278 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.349394 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.349490 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:44Z","lastTransitionTime":"2026-01-29T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.451976 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.452323 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.452397 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.452468 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.452526 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:44Z","lastTransitionTime":"2026-01-29T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.556329 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.556389 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.556401 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.556422 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.556441 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:44Z","lastTransitionTime":"2026-01-29T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.659696 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.659767 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.659783 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.659810 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.659826 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:44Z","lastTransitionTime":"2026-01-29T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.763504 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.763570 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.763590 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.763614 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.763636 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:44Z","lastTransitionTime":"2026-01-29T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.866212 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.866270 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.866281 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.866301 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.866311 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:44Z","lastTransitionTime":"2026-01-29T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.968978 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.969052 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.969069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.969094 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:44 crc kubenswrapper[4895]: I0129 08:41:44.969106 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:44Z","lastTransitionTime":"2026-01-29T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.073513 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.073574 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.073590 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.073616 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.073648 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:45Z","lastTransitionTime":"2026-01-29T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.177094 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.177159 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.177178 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.177206 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.177225 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:45Z","lastTransitionTime":"2026-01-29T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.183379 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 19:20:13.277970095 +0000 UTC Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.211242 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:45 crc kubenswrapper[4895]: E0129 08:41:45.211505 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.280556 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.280601 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.280617 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.280635 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.280650 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:45Z","lastTransitionTime":"2026-01-29T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.383692 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.383758 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.383795 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.383829 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.383852 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:45Z","lastTransitionTime":"2026-01-29T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.487168 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.487578 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.487715 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.487870 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.488063 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:45Z","lastTransitionTime":"2026-01-29T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.591560 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.591613 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.591626 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.591649 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.591664 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:45Z","lastTransitionTime":"2026-01-29T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.695662 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.695904 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.695954 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.695996 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.696015 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:45Z","lastTransitionTime":"2026-01-29T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.798832 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.799188 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.799325 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.799473 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.799601 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:45Z","lastTransitionTime":"2026-01-29T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.902761 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.902808 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.902819 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.902838 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:45 crc kubenswrapper[4895]: I0129 08:41:45.902850 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:45Z","lastTransitionTime":"2026-01-29T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.006250 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.006664 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.006743 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.006814 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.006893 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:46Z","lastTransitionTime":"2026-01-29T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.109609 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.109677 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.109691 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.109711 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.109743 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:46Z","lastTransitionTime":"2026-01-29T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.183977 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:03:59.382913154 +0000 UTC Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.210693 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.210822 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:46 crc kubenswrapper[4895]: E0129 08:41:46.210846 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:46 crc kubenswrapper[4895]: E0129 08:41:46.211041 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.211385 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:46 crc kubenswrapper[4895]: E0129 08:41:46.211564 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.213113 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.213171 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.213187 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.213210 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.213229 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:46Z","lastTransitionTime":"2026-01-29T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.316423 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.316474 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.316489 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.316508 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.316519 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:46Z","lastTransitionTime":"2026-01-29T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.317807 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.317876 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.317904 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.317936 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.317949 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:46Z","lastTransitionTime":"2026-01-29T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:46 crc kubenswrapper[4895]: E0129 08:41:46.334845 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:46Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.340262 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.340374 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.340390 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.340414 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.340427 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:46Z","lastTransitionTime":"2026-01-29T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:46 crc kubenswrapper[4895]: E0129 08:41:46.354255 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:46Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.358326 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.358363 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.358376 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.358397 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.358410 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:46Z","lastTransitionTime":"2026-01-29T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:46 crc kubenswrapper[4895]: E0129 08:41:46.375873 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:46Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.381003 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.381055 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.381066 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.381084 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.381096 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:46Z","lastTransitionTime":"2026-01-29T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:46 crc kubenswrapper[4895]: E0129 08:41:46.394831 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:46Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.399184 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.399239 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.399251 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.399269 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.399281 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:46Z","lastTransitionTime":"2026-01-29T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:46 crc kubenswrapper[4895]: E0129 08:41:46.412485 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:46Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:46 crc kubenswrapper[4895]: E0129 08:41:46.412611 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.419004 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.419036 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.419047 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.419065 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.419077 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:46Z","lastTransitionTime":"2026-01-29T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.521962 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.522001 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.522010 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.522026 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.522038 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:46Z","lastTransitionTime":"2026-01-29T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.624526 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.624582 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.624594 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.624610 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.624620 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:46Z","lastTransitionTime":"2026-01-29T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.727414 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.727466 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.727476 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.727493 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.727503 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:46Z","lastTransitionTime":"2026-01-29T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.829419 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.829466 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.829475 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.829492 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.829502 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:46Z","lastTransitionTime":"2026-01-29T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.932190 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.932237 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.932247 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.932262 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:46 crc kubenswrapper[4895]: I0129 08:41:46.932272 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:46Z","lastTransitionTime":"2026-01-29T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.035071 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.035113 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.035122 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.035137 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.035147 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:47Z","lastTransitionTime":"2026-01-29T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.137513 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.137547 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.137556 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.137573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.137586 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:47Z","lastTransitionTime":"2026-01-29T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.184585 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 16:16:58.935569005 +0000 UTC Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.211180 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:47 crc kubenswrapper[4895]: E0129 08:41:47.211351 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.239709 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.239746 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.239755 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.239768 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.239777 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:47Z","lastTransitionTime":"2026-01-29T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.343396 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.343492 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.343529 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.343566 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.343590 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:47Z","lastTransitionTime":"2026-01-29T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.446242 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.446335 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.446372 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.446408 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.446431 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:47Z","lastTransitionTime":"2026-01-29T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.501365 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs\") pod \"network-metrics-daemon-g4585\" (UID: \"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\") " pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:47 crc kubenswrapper[4895]: E0129 08:41:47.501577 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:41:47 crc kubenswrapper[4895]: E0129 08:41:47.501699 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs podName:d167bf78-4ea9-42d8-8ab6-6aaf234e102e nodeName:}" failed. No retries permitted until 2026-01-29 08:41:55.501669563 +0000 UTC m=+57.143177719 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs") pod "network-metrics-daemon-g4585" (UID: "d167bf78-4ea9-42d8-8ab6-6aaf234e102e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.549587 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.549684 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.549737 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.549767 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.549783 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:47Z","lastTransitionTime":"2026-01-29T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.652451 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.652515 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.652525 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.652550 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.652564 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:47Z","lastTransitionTime":"2026-01-29T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.755533 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.755588 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.755605 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.755626 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.755639 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:47Z","lastTransitionTime":"2026-01-29T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.858419 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.858471 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.858481 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.858503 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.858521 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:47Z","lastTransitionTime":"2026-01-29T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.962251 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.962306 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.962322 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.962343 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:47 crc kubenswrapper[4895]: I0129 08:41:47.962359 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:47Z","lastTransitionTime":"2026-01-29T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.065640 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.065699 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.065712 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.065733 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.065752 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:48Z","lastTransitionTime":"2026-01-29T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.169420 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.169469 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.169482 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.169499 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.169514 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:48Z","lastTransitionTime":"2026-01-29T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.184898 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 05:04:22.929025045 +0000 UTC Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.210399 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.210752 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.210405 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:48 crc kubenswrapper[4895]: E0129 08:41:48.210977 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:48 crc kubenswrapper[4895]: E0129 08:41:48.210775 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:48 crc kubenswrapper[4895]: E0129 08:41:48.211247 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.273051 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.273111 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.273123 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.273145 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.273158 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:48Z","lastTransitionTime":"2026-01-29T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.382554 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.382612 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.382624 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.382645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.382659 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:48Z","lastTransitionTime":"2026-01-29T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.485370 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.485432 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.485441 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.485462 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.485473 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:48Z","lastTransitionTime":"2026-01-29T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.589236 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.589309 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.589321 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.589342 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.589358 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:48Z","lastTransitionTime":"2026-01-29T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.692538 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.692621 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.692646 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.692677 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.692699 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:48Z","lastTransitionTime":"2026-01-29T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.796219 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.796263 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.796274 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.796290 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.796303 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:48Z","lastTransitionTime":"2026-01-29T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.900859 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.900927 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.900941 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.900986 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:48 crc kubenswrapper[4895]: I0129 08:41:48.901005 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:48Z","lastTransitionTime":"2026-01-29T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.003442 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.003478 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.003486 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.003505 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.003518 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:49Z","lastTransitionTime":"2026-01-29T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.106594 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.106637 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.106646 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.106662 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.106672 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:49Z","lastTransitionTime":"2026-01-29T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.185612 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:50:33.422335997 +0000 UTC Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.210124 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.210169 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.210179 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.210198 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.210213 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.210211 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:49Z","lastTransitionTime":"2026-01-29T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:49 crc kubenswrapper[4895]: E0129 08:41:49.210377 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.227297 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:49Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.243020 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:49Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.259468 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:49Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.273909 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:49Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.285009 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:49Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.306418 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:49Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:49 crc 
kubenswrapper[4895]: I0129 08:41:49.316242 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.316292 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.316301 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.316321 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.316331 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:49Z","lastTransitionTime":"2026-01-29T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.324034 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:49Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.339253 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08
:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:49Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.351003 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:49Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.368879 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a117584
98a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:49Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.384325 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:49Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.398619 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T08:41:49Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.412885 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:49Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.419865 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 
08:41:49.420223 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.420492 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.420777 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.420959 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:49Z","lastTransitionTime":"2026-01-29T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.430572 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177
faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:49Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.454763 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:37.256935 6094 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 
08:41:37.257094 6094 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257191 6094 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257251 6094 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257329 6094 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:37.257878 6094 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 08:41:37.257963 6094 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 08:41:37.258045 6094 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 08:41:37.258105 6094 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 08:41:37.258055 6094 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"message\\\":\\\"12 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 08:41:40.149840 6312 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:40.150169 6312 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:40.150472 6312 factory.go:656] Stopping watch factory\\\\nI0129 08:41:40.150503 
6312 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:40.150822 6312 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:40.176244 6312 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 08:41:40.176298 6312 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 08:41:40.176415 6312 ovnkube.go:599] Stopped ovnkube\\\\nI0129 08:41:40.176477 6312 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 08:41:40.176639 6312 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"
/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:49Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.466475 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee62f73cca5e2bfdd470a4588d7e049a51be64fae5112e303145bc00c01ad5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:49Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.524153 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.524210 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 
08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.524224 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.524244 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.524257 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:49Z","lastTransitionTime":"2026-01-29T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.627358 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.627432 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.627445 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.627512 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.627528 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:49Z","lastTransitionTime":"2026-01-29T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.729714 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.729756 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.729765 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.729781 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.729789 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:49Z","lastTransitionTime":"2026-01-29T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.833023 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.833063 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.833074 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.833091 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.833102 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:49Z","lastTransitionTime":"2026-01-29T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.936188 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.936228 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.936237 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.936251 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:49 crc kubenswrapper[4895]: I0129 08:41:49.936262 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:49Z","lastTransitionTime":"2026-01-29T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.006767 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.020604 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.029764 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.038480 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.038517 4895 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.038528 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.038549 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.038562 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:50Z","lastTransitionTime":"2026-01-29T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.052158 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.069766 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.090163 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.105964 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.117207 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:50 crc 
kubenswrapper[4895]: I0129 08:41:50.133094 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p
wz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.141528 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.141581 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.141594 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.141617 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.141628 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:50Z","lastTransitionTime":"2026-01-29T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.152094 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.169015 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.186344 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:48:02.697350061 +0000 UTC Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.186469 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.201982 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T08:41:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.210629 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.210657 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.210629 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:50 crc kubenswrapper[4895]: E0129 08:41:50.210774 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:50 crc kubenswrapper[4895]: E0129 08:41:50.210872 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:50 crc kubenswrapper[4895]: E0129 08:41:50.210907 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.217628 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.229761 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc3
67b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.243967 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177
faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.245121 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.245296 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.245473 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.245614 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.245765 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:50Z","lastTransitionTime":"2026-01-29T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.264804 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25efe9b022863a0bf60d1995adfd9bd8071f654153f6fa9451bf0d5df82f549c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"message\\\":\\\") from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:37.256935 6094 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 
08:41:37.257094 6094 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257191 6094 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257251 6094 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 08:41:37.257329 6094 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:37.257878 6094 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 08:41:37.257963 6094 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 08:41:37.258045 6094 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 08:41:37.258105 6094 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 08:41:37.258055 6094 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"message\\\":\\\"12 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 08:41:40.149840 6312 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:40.150169 6312 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:40.150472 6312 factory.go:656] Stopping watch factory\\\\nI0129 08:41:40.150503 
6312 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:40.150822 6312 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:40.176244 6312 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 08:41:40.176298 6312 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 08:41:40.176415 6312 ovnkube.go:599] Stopped ovnkube\\\\nI0129 08:41:40.176477 6312 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 08:41:40.176639 6312 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"
/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.277903 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee62f73cca5e2bfdd470a4588d7e049a51be64fae5112e303145bc00c01ad5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:50Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.348142 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.348176 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 
08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.348184 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.348220 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.348233 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:50Z","lastTransitionTime":"2026-01-29T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.450348 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.450387 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.450401 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.450418 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.450430 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:50Z","lastTransitionTime":"2026-01-29T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.553516 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.553571 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.553589 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.553612 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.553630 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:50Z","lastTransitionTime":"2026-01-29T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.656345 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.656388 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.656400 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.656420 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.656434 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:50Z","lastTransitionTime":"2026-01-29T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.759786 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.759849 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.759862 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.759885 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.759906 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:50Z","lastTransitionTime":"2026-01-29T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.862743 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.862787 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.862799 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.862817 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.862830 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:50Z","lastTransitionTime":"2026-01-29T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.966420 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.966481 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.966497 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.966519 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:50 crc kubenswrapper[4895]: I0129 08:41:50.966533 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:50Z","lastTransitionTime":"2026-01-29T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.069472 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.069541 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.069562 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.069590 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.069608 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:51Z","lastTransitionTime":"2026-01-29T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.127975 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.128981 4895 scope.go:117] "RemoveContainer" containerID="67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.140086 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.157591 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a117584
98a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.172826 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.172880 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.172893 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.172943 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.172959 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:51Z","lastTransitionTime":"2026-01-29T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.173636 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f10
52dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.186894 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:09:29.410194755 +0000 UTC Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.187325 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ce52dc-df05-45bb-9f9c-e0512bbf8b4b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c069173f445cfb105a3b28f49ce9bf7793ee448913a03b8b6d1ddbef91daee04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9854afcfb65e0bf01d43a3c9135ec03936b53449edd9a024f857fea2ab013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd7a8365fe1196984119afbb2bd6f61289bb5396b77f733275c1e275519b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.203496 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.211139 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:51 crc kubenswrapper[4895]: E0129 08:41:51.211303 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.218693 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399
d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.230639 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f
6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.240751 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc3
67b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.255062 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177
faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.273899 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"message\\\":\\\"12 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 08:41:40.149840 6312 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:40.150169 6312 reflector.go:311] Stopping reflector 
*v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:40.150472 6312 factory.go:656] Stopping watch factory\\\\nI0129 08:41:40.150503 6312 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:40.150822 6312 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:40.176244 6312 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 08:41:40.176298 6312 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 08:41:40.176415 6312 ovnkube.go:599] Stopped ovnkube\\\\nI0129 08:41:40.176477 6312 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 08:41:40.176639 6312 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123c
f96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.275188 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.275221 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.275232 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.275249 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.275258 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:51Z","lastTransitionTime":"2026-01-29T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.284621 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee62f73cca5e2bfdd470a4588d7e049a51be64fae5112e303145bc00c01ad5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.297305 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.308796 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.319861 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.332450 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.345540 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.356625 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc 
kubenswrapper[4895]: I0129 08:41:51.378196 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.378236 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.378245 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.378262 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.378271 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:51Z","lastTransitionTime":"2026-01-29T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.480635 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.480684 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.480698 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.480715 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.480727 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:51Z","lastTransitionTime":"2026-01-29T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.545786 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovnkube-controller/1.log" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.548093 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerStarted","Data":"7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3"} Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.548545 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.564327 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.584263 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.584301 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.584312 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:51 crc 
kubenswrapper[4895]: I0129 08:41:51.584331 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.584344 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:51Z","lastTransitionTime":"2026-01-29T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.585144 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.598199 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc 
kubenswrapper[4895]: I0129 08:41:51.615844 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.640510 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a117584
98a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.664437 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.685790 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ce52dc-df05-45bb-9f9c-e0512bbf8b4b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c069173f445cfb105a3b28f49ce9bf7793ee448913a03b8b6d1ddbef91daee04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9854afcfb65e0bf01d43a3c9135ec03936b53449edd9a024f857fea2ab013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd7a8365fe1196984119afbb2bd6f61289bb5396b77f733275c1e275519b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.686722 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.686755 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.686771 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.686804 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.686817 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:51Z","lastTransitionTime":"2026-01-29T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.697907 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.718227 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08
:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.731971 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.746888 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc3
67b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.764018 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177
faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.785424 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"message\\\":\\\"12 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 08:41:40.149840 6312 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:40.150169 6312 reflector.go:311] Stopping reflector 
*v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:40.150472 6312 factory.go:656] Stopping watch factory\\\\nI0129 08:41:40.150503 6312 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:40.150822 6312 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:40.176244 6312 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 08:41:40.176298 6312 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 08:41:40.176415 6312 ovnkube.go:599] Stopped ovnkube\\\\nI0129 08:41:40.176477 6312 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 08:41:40.176639 6312 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.789459 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.789495 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.789506 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.789523 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.789535 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:51Z","lastTransitionTime":"2026-01-29T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.805153 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee62f73cca5e2bfdd470a4588d7e049a51be64fae5112e303145bc00c01ad5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.820715 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.836356 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.855018 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:51Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.891703 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.891752 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.891762 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.891778 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.891788 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:51Z","lastTransitionTime":"2026-01-29T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.957244 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.957424 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:51 crc kubenswrapper[4895]: E0129 08:41:51.957526 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:42:23.957467802 +0000 UTC m=+85.598975948 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:41:51 crc kubenswrapper[4895]: E0129 08:41:51.957540 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.957603 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:51 crc kubenswrapper[4895]: E0129 08:41:51.957633 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:42:23.957617385 +0000 UTC m=+85.599125731 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:41:51 crc kubenswrapper[4895]: E0129 08:41:51.957659 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:41:51 crc kubenswrapper[4895]: E0129 08:41:51.957725 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:42:23.957706817 +0000 UTC m=+85.599214973 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.994481 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.994567 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.994601 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:51 crc kubenswrapper[4895]: I0129 08:41:51.994620 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:51 crc 
kubenswrapper[4895]: I0129 08:41:51.994637 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:51Z","lastTransitionTime":"2026-01-29T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.059033 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.059103 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:52 crc kubenswrapper[4895]: E0129 08:41:52.059281 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:41:52 crc kubenswrapper[4895]: E0129 08:41:52.059304 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:41:52 crc kubenswrapper[4895]: E0129 08:41:52.059318 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:52 crc kubenswrapper[4895]: E0129 08:41:52.059367 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:41:52 crc kubenswrapper[4895]: E0129 08:41:52.059393 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 08:42:24.059372073 +0000 UTC m=+85.700880229 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:52 crc kubenswrapper[4895]: E0129 08:41:52.059409 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:41:52 crc kubenswrapper[4895]: E0129 08:41:52.059472 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:52 crc kubenswrapper[4895]: E0129 08:41:52.059571 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 08:42:24.059545427 +0000 UTC m=+85.701053573 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.097289 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.097341 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.097350 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.097368 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.097379 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:52Z","lastTransitionTime":"2026-01-29T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.188292 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:27:11.277441781 +0000 UTC Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.200339 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.200378 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.200389 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.200409 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.200422 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:52Z","lastTransitionTime":"2026-01-29T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.210513 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.210513 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.210743 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:52 crc kubenswrapper[4895]: E0129 08:41:52.210645 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:52 crc kubenswrapper[4895]: E0129 08:41:52.210907 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:52 crc kubenswrapper[4895]: E0129 08:41:52.211011 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.304489 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.304572 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.304587 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.304615 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.304632 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:52Z","lastTransitionTime":"2026-01-29T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.407908 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.407982 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.407994 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.408013 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.408027 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:52Z","lastTransitionTime":"2026-01-29T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.510464 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.510520 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.510530 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.510552 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.510577 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:52Z","lastTransitionTime":"2026-01-29T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.554811 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovnkube-controller/2.log" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.555441 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovnkube-controller/1.log" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.558469 4895 generic.go:334] "Generic (PLEG): container finished" podID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerID="7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3" exitCode=1 Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.558511 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerDied","Data":"7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3"} Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.558567 4895 scope.go:117] "RemoveContainer" containerID="67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.559278 4895 scope.go:117] "RemoveContainer" containerID="7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3" Jan 29 08:41:52 crc kubenswrapper[4895]: E0129 08:41:52.559436 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.572496 4895 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.583526 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.602448 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.613557 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.613650 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.613667 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.613686 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.613726 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:52Z","lastTransitionTime":"2026-01-29T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.624134 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67d5c4abcdd5b0d65bbf017ce1afdbbb36fc9c87ed47075939d5391487a6fe67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"message\\\":\\\"12 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 08:41:40.149840 6312 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:40.150169 6312 reflector.go:311] Stopping reflector 
*v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:40.150472 6312 factory.go:656] Stopping watch factory\\\\nI0129 08:41:40.150503 6312 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 08:41:40.150822 6312 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 08:41:40.176244 6312 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 08:41:40.176298 6312 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 08:41:40.176415 6312 ovnkube.go:599] Stopped ovnkube\\\\nI0129 08:41:40.176477 6312 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 08:41:40.176639 6312 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:52Z\\\",\\\"message\\\":\\\"08:41:52.069211 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-z82hk\\\\nI0129 08:41:52.069212 6510 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-q9lpx in node crc\\\\nI0129 08:41:52.069225 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-q9lpx after 0 failed attempt(s)\\\\nI0129 08:41:52.069218 6510 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-z82hk 
in node crc\\\\nI0129 08:41:52.069018 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069267 6510 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0129 08:41:52.069273 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0129 08:41:52.069277 6510 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069230 6510 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-q9lpx\\\\nI0129 08:41:52.069064 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPa
th\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.637314 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee62f73cca5e2bfdd470a4588d7e049a51be64fae5112e303145bc00c01ad5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.650061 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.665040 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.680852 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.694446 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.707817 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.720710 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc 
kubenswrapper[4895]: I0129 08:41:52.738197 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc 
kubenswrapper[4895]: I0129 08:41:52.746159 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.746208 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.746220 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.746239 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.746255 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:52Z","lastTransitionTime":"2026-01-29T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.751300 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.767863 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 
08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.781539 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.798416 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ce52dc-df05-45bb-9f9c-e0512bbf8b4b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c069173f445cfb105a3b28f49ce9bf7793ee448913a03b8b6d1ddbef91daee04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9854afcfb65e0bf01d43a3c9135ec03936b53449edd9a024f857fea2ab013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd7a8365fe1196984119afbb2bd6f61289bb5396b77f733275c1e275519b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.815246 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:52Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.849081 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.849113 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.849121 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.849137 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.849146 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:52Z","lastTransitionTime":"2026-01-29T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.951718 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.951819 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.951835 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.951856 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:52 crc kubenswrapper[4895]: I0129 08:41:52.951868 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:52Z","lastTransitionTime":"2026-01-29T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.054416 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.054482 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.054497 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.054519 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.054534 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:53Z","lastTransitionTime":"2026-01-29T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.157328 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.157392 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.157404 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.157427 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.157439 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:53Z","lastTransitionTime":"2026-01-29T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.188790 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:51:17.912782587 +0000 UTC Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.210478 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:53 crc kubenswrapper[4895]: E0129 08:41:53.210677 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.260311 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.260360 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.260373 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.260392 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.260406 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:53Z","lastTransitionTime":"2026-01-29T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.362975 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.363017 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.363027 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.363047 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.363059 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:53Z","lastTransitionTime":"2026-01-29T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.466019 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.466072 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.466084 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.466105 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.466117 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:53Z","lastTransitionTime":"2026-01-29T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.563613 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovnkube-controller/2.log" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.567765 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.567810 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.567825 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.567842 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.567854 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:53Z","lastTransitionTime":"2026-01-29T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.568047 4895 scope.go:117] "RemoveContainer" containerID="7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3" Jan 29 08:41:53 crc kubenswrapper[4895]: E0129 08:41:53.568230 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.585551 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.603446 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.618026 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.634824 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.649116 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc 
kubenswrapper[4895]: I0129 08:41:53.664450 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.670618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.670751 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.670768 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.670795 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.670804 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:53Z","lastTransitionTime":"2026-01-29T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.681870 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.696647 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.711534 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ce52dc-df05-45bb-9f9c-e0512bbf8b4b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c069173f445cfb105a3b28f49ce9bf7793ee448913a03b8b6d1ddbef91daee04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9854afcfb65e0bf01d43a3c9135ec03936b53449edd9a024f857fea2ab013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd7a8365fe1196984119afbb2bd6f61289bb5396b77f733275c1e275519b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.725777 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.740600 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.752344 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.766190 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc3
67b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.773654 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.773705 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.773740 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:53 crc 
kubenswrapper[4895]: I0129 08:41:53.773761 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.773773 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:53Z","lastTransitionTime":"2026-01-29T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.782818 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22
0413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.805726 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:52Z\\\",\\\"message\\\":\\\"08:41:52.069211 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-z82hk\\\\nI0129 08:41:52.069212 6510 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-q9lpx in node crc\\\\nI0129 08:41:52.069225 6510 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-image-registry/node-ca-q9lpx after 0 failed attempt(s)\\\\nI0129 08:41:52.069218 6510 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-z82hk in node crc\\\\nI0129 08:41:52.069018 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069267 6510 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0129 08:41:52.069273 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0129 08:41:52.069277 6510 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069230 6510 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-q9lpx\\\\nI0129 08:41:52.069064 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123c
f96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.818687 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee62f73cca5e2bfdd470a4588d7e049a51be6
4fae5112e303145bc00c01ad5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.832668 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T08:41:53Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.876985 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.877044 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.877057 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.877081 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.877095 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:53Z","lastTransitionTime":"2026-01-29T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.979824 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.979902 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.979951 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.979977 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:53 crc kubenswrapper[4895]: I0129 08:41:53.979996 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:53Z","lastTransitionTime":"2026-01-29T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.082597 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.082637 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.082648 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.082669 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.082682 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:54Z","lastTransitionTime":"2026-01-29T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.185709 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.185754 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.185766 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.185788 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.185801 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:54Z","lastTransitionTime":"2026-01-29T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.189910 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:21:20.092097105 +0000 UTC Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.210381 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.210523 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.210620 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:54 crc kubenswrapper[4895]: E0129 08:41:54.210617 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:54 crc kubenswrapper[4895]: E0129 08:41:54.210773 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:54 crc kubenswrapper[4895]: E0129 08:41:54.210829 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.290003 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.290053 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.290063 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.290081 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.290090 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:54Z","lastTransitionTime":"2026-01-29T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.393241 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.393637 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.393708 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.393779 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.393837 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:54Z","lastTransitionTime":"2026-01-29T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.496637 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.496724 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.496733 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.496748 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.496758 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:54Z","lastTransitionTime":"2026-01-29T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.599972 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.600034 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.600046 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.600067 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.600077 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:54Z","lastTransitionTime":"2026-01-29T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.703227 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.703266 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.703280 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.703298 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.703309 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:54Z","lastTransitionTime":"2026-01-29T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.806027 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.806074 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.806086 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.806104 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.806116 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:54Z","lastTransitionTime":"2026-01-29T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.909756 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.909800 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.909808 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.909825 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:54 crc kubenswrapper[4895]: I0129 08:41:54.909839 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:54Z","lastTransitionTime":"2026-01-29T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.013081 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.013125 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.013133 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.013152 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.013197 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:55Z","lastTransitionTime":"2026-01-29T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.117013 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.117059 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.117073 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.117094 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.117108 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:55Z","lastTransitionTime":"2026-01-29T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.190813 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:35:33.543517446 +0000 UTC Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.210558 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:55 crc kubenswrapper[4895]: E0129 08:41:55.210768 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.218971 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.219021 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.219039 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.219104 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.219124 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:55Z","lastTransitionTime":"2026-01-29T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.321731 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.321768 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.321776 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.321792 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.321801 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:55Z","lastTransitionTime":"2026-01-29T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.424361 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.424396 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.424406 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.424422 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.424432 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:55Z","lastTransitionTime":"2026-01-29T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.527796 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.527845 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.527864 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.527888 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.527908 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:55Z","lastTransitionTime":"2026-01-29T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.599608 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs\") pod \"network-metrics-daemon-g4585\" (UID: \"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\") " pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:55 crc kubenswrapper[4895]: E0129 08:41:55.599752 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:41:55 crc kubenswrapper[4895]: E0129 08:41:55.599811 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs podName:d167bf78-4ea9-42d8-8ab6-6aaf234e102e nodeName:}" failed. No retries permitted until 2026-01-29 08:42:11.599792962 +0000 UTC m=+73.241301108 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs") pod "network-metrics-daemon-g4585" (UID: "d167bf78-4ea9-42d8-8ab6-6aaf234e102e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.630400 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.630645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.630707 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.630767 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.630829 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:55Z","lastTransitionTime":"2026-01-29T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.733157 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.733476 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.733658 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.733968 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.734139 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:55Z","lastTransitionTime":"2026-01-29T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.836547 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.836612 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.836628 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.836651 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.836665 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:55Z","lastTransitionTime":"2026-01-29T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.938983 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.939027 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.939045 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.939064 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:55 crc kubenswrapper[4895]: I0129 08:41:55.939077 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:55Z","lastTransitionTime":"2026-01-29T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.042893 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.042955 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.042968 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.042989 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.043002 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:56Z","lastTransitionTime":"2026-01-29T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.145966 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.146346 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.146442 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.146558 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.146632 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:56Z","lastTransitionTime":"2026-01-29T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.191685 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:58:44.968924655 +0000 UTC Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.210273 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.210347 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:56 crc kubenswrapper[4895]: E0129 08:41:56.210756 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.210394 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:56 crc kubenswrapper[4895]: E0129 08:41:56.211037 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:56 crc kubenswrapper[4895]: E0129 08:41:56.210820 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.249698 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.249753 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.249764 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.249781 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.249797 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:56Z","lastTransitionTime":"2026-01-29T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.352651 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.352695 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.352705 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.352721 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.352730 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:56Z","lastTransitionTime":"2026-01-29T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.455414 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.455466 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.455478 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.455497 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.455509 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:56Z","lastTransitionTime":"2026-01-29T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.557902 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.558005 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.558022 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.558049 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.558066 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:56Z","lastTransitionTime":"2026-01-29T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.660578 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.660626 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.660637 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.660654 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.660667 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:56Z","lastTransitionTime":"2026-01-29T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.738483 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.738826 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.738908 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.739015 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.739117 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:56Z","lastTransitionTime":"2026-01-29T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:56 crc kubenswrapper[4895]: E0129 08:41:56.761014 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:56Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.765849 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.766049 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.766118 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.766191 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.766256 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:56Z","lastTransitionTime":"2026-01-29T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:56 crc kubenswrapper[4895]: E0129 08:41:56.787063 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:56Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.792430 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.792489 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.792499 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.792518 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.792528 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:56Z","lastTransitionTime":"2026-01-29T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:56 crc kubenswrapper[4895]: E0129 08:41:56.811482 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:56Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.816108 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.816159 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.816168 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.816184 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.816194 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:56Z","lastTransitionTime":"2026-01-29T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:56 crc kubenswrapper[4895]: E0129 08:41:56.828993 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:56Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.833857 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.833934 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.833952 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.833979 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.833997 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:56Z","lastTransitionTime":"2026-01-29T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:56 crc kubenswrapper[4895]: E0129 08:41:56.849880 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:56Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:56 crc kubenswrapper[4895]: E0129 08:41:56.850019 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.851664 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.851691 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.851702 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.851721 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.851734 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:56Z","lastTransitionTime":"2026-01-29T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.955499 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.955802 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.955892 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.956004 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:56 crc kubenswrapper[4895]: I0129 08:41:56.956099 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:56Z","lastTransitionTime":"2026-01-29T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.058689 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.058736 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.058748 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.058769 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.058779 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:57Z","lastTransitionTime":"2026-01-29T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.162259 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.162302 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.162315 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.162334 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.162346 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:57Z","lastTransitionTime":"2026-01-29T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.193497 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:55:57.484724364 +0000 UTC Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.211045 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:57 crc kubenswrapper[4895]: E0129 08:41:57.211229 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.265362 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.265431 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.265448 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.265469 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.265481 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:57Z","lastTransitionTime":"2026-01-29T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.368598 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.368660 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.368674 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.368697 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.368714 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:57Z","lastTransitionTime":"2026-01-29T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.471577 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.471630 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.471646 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.471668 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.471679 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:57Z","lastTransitionTime":"2026-01-29T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.574884 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.574964 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.574976 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.574997 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.575011 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:57Z","lastTransitionTime":"2026-01-29T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.677583 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.677645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.677664 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.677744 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.677768 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:57Z","lastTransitionTime":"2026-01-29T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.780583 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.780634 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.780645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.780666 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.780679 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:57Z","lastTransitionTime":"2026-01-29T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.883517 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.883550 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.883559 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.883578 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.883590 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:57Z","lastTransitionTime":"2026-01-29T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.987246 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.987293 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.987301 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.987320 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:57 crc kubenswrapper[4895]: I0129 08:41:57.987329 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:57Z","lastTransitionTime":"2026-01-29T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.090084 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.090154 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.090166 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.090187 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.090200 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:58Z","lastTransitionTime":"2026-01-29T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.192967 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.193024 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.193038 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.193063 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.193089 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:58Z","lastTransitionTime":"2026-01-29T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.194017 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:10:20.920038034 +0000 UTC Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.210304 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.210409 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.210454 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:41:58 crc kubenswrapper[4895]: E0129 08:41:58.210568 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:41:58 crc kubenswrapper[4895]: E0129 08:41:58.210721 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:41:58 crc kubenswrapper[4895]: E0129 08:41:58.210820 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.295954 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.296002 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.296016 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.296032 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.296044 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:58Z","lastTransitionTime":"2026-01-29T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.399062 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.399366 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.399491 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.399599 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.399685 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:58Z","lastTransitionTime":"2026-01-29T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.502069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.502129 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.502142 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.502168 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.502183 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:58Z","lastTransitionTime":"2026-01-29T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.604465 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.604500 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.604508 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.604523 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.604532 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:58Z","lastTransitionTime":"2026-01-29T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.706909 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.706966 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.706976 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.706992 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.707001 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:58Z","lastTransitionTime":"2026-01-29T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.809568 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.809631 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.809649 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.809672 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.809687 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:58Z","lastTransitionTime":"2026-01-29T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.918554 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.918605 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.918619 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.918640 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:58 crc kubenswrapper[4895]: I0129 08:41:58.918651 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:58Z","lastTransitionTime":"2026-01-29T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.022119 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.022170 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.022182 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.022201 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.022215 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:59Z","lastTransitionTime":"2026-01-29T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.125318 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.125380 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.125402 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.125429 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.125446 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:59Z","lastTransitionTime":"2026-01-29T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.194935 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:38:52.488507129 +0000 UTC Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.210384 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:41:59 crc kubenswrapper[4895]: E0129 08:41:59.210535 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.226318 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.228083 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.228131 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.228141 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 
08:41:59.228161 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.228170 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:59Z","lastTransitionTime":"2026-01-29T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.242911 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.270433 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.292063 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.319600 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.330558 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.330604 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.330622 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.330642 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.330655 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:59Z","lastTransitionTime":"2026-01-29T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.331861 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc 
kubenswrapper[4895]: I0129 08:41:59.343940 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ce52dc-df05-45bb-9f9c-e0512bbf8b4b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c069173f445cfb105a3b28f49ce9bf7793ee448913a03b8b6d1ddbef91daee04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9854afcfb65e0bf01d43a3c9135ec03936b53449edd9a024f857fea2ab013d\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd7a8365fe1196984119afbb2bd6f61289bb5396b77f733275c1e275519b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.356799 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.370483 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.381679 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.394606 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a117584
98a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.408248 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.421974 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee62f73cca5e2bfdd470a4588d7e049a51be6
4fae5112e303145bc00c01ad5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.433643 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.433908 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.434022 4895 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.434089 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.434145 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:59Z","lastTransitionTime":"2026-01-29T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.435274 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb27670
3f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.449248 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc3
67b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.466276 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177
faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.494336 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:52Z\\\",\\\"message\\\":\\\"08:41:52.069211 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-z82hk\\\\nI0129 08:41:52.069212 6510 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-q9lpx in node crc\\\\nI0129 08:41:52.069225 6510 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-image-registry/node-ca-q9lpx after 0 failed attempt(s)\\\\nI0129 08:41:52.069218 6510 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-z82hk in node crc\\\\nI0129 08:41:52.069018 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069267 6510 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0129 08:41:52.069273 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0129 08:41:52.069277 6510 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069230 6510 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-q9lpx\\\\nI0129 08:41:52.069064 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123c
f96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:41:59Z is after 2025-08-24T17:21:41Z" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.537368 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.537436 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.537447 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.537469 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.537484 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:59Z","lastTransitionTime":"2026-01-29T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.640324 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.640363 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.640373 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.640388 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.640398 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:59Z","lastTransitionTime":"2026-01-29T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.743326 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.743907 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.743946 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.743970 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.743989 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:59Z","lastTransitionTime":"2026-01-29T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.847016 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.847096 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.847121 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.847154 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.847177 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:59Z","lastTransitionTime":"2026-01-29T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.950452 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.950506 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.950519 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.950537 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:41:59 crc kubenswrapper[4895]: I0129 08:41:59.950549 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:41:59Z","lastTransitionTime":"2026-01-29T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.053835 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.053888 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.053900 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.054137 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.054152 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:00Z","lastTransitionTime":"2026-01-29T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.157849 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.157906 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.157936 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.157957 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.157968 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:00Z","lastTransitionTime":"2026-01-29T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.196536 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 21:50:07.028127529 +0000 UTC Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.210986 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.211029 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.211004 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:00 crc kubenswrapper[4895]: E0129 08:42:00.211159 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:00 crc kubenswrapper[4895]: E0129 08:42:00.211258 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:00 crc kubenswrapper[4895]: E0129 08:42:00.211344 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.260558 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.260618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.260635 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.260654 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.260664 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:00Z","lastTransitionTime":"2026-01-29T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.363303 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.363357 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.363369 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.363386 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.363399 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:00Z","lastTransitionTime":"2026-01-29T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.466851 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.466903 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.466933 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.466951 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.466963 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:00Z","lastTransitionTime":"2026-01-29T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.569727 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.569786 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.569798 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.569820 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.569833 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:00Z","lastTransitionTime":"2026-01-29T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.672698 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.672747 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.672757 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.672775 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.672789 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:00Z","lastTransitionTime":"2026-01-29T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.775447 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.775505 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.775518 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.775537 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.775550 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:00Z","lastTransitionTime":"2026-01-29T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.878380 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.878427 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.878441 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.878458 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.878472 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:00Z","lastTransitionTime":"2026-01-29T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.981381 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.981435 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.981446 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.981468 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:00 crc kubenswrapper[4895]: I0129 08:42:00.981479 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:00Z","lastTransitionTime":"2026-01-29T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.084010 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.084058 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.084067 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.084082 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.084091 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:01Z","lastTransitionTime":"2026-01-29T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.186393 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.186490 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.186501 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.186523 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.186543 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:01Z","lastTransitionTime":"2026-01-29T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.196695 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:38:28.109021306 +0000 UTC Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.211463 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:01 crc kubenswrapper[4895]: E0129 08:42:01.211689 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.288745 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.288784 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.288793 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.288813 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.288823 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:01Z","lastTransitionTime":"2026-01-29T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.391851 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.392227 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.392363 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.392479 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.392567 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:01Z","lastTransitionTime":"2026-01-29T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.494748 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.494805 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.494815 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.494830 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.494840 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:01Z","lastTransitionTime":"2026-01-29T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.602325 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.602388 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.602402 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.602422 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.602435 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:01Z","lastTransitionTime":"2026-01-29T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.704829 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.704864 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.704873 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.704890 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.704902 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:01Z","lastTransitionTime":"2026-01-29T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.807548 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.807601 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.807614 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.807636 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.807648 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:01Z","lastTransitionTime":"2026-01-29T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.911163 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.911209 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.911222 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.911242 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:01 crc kubenswrapper[4895]: I0129 08:42:01.911254 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:01Z","lastTransitionTime":"2026-01-29T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.014182 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.014543 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.014639 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.014761 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.014886 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:02Z","lastTransitionTime":"2026-01-29T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.118051 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.118091 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.118100 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.118120 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.118134 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:02Z","lastTransitionTime":"2026-01-29T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.197297 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:23:49.441194161 +0000 UTC Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.210753 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.210786 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:02 crc kubenswrapper[4895]: E0129 08:42:02.211164 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:02 crc kubenswrapper[4895]: E0129 08:42:02.211337 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.210810 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:02 crc kubenswrapper[4895]: E0129 08:42:02.211462 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.220992 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.221182 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.221321 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.221455 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.221596 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:02Z","lastTransitionTime":"2026-01-29T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.325381 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.325429 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.325442 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.325463 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.325475 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:02Z","lastTransitionTime":"2026-01-29T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.428083 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.428158 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.428179 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.428208 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.428227 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:02Z","lastTransitionTime":"2026-01-29T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.531623 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.531897 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.532028 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.532126 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.532217 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:02Z","lastTransitionTime":"2026-01-29T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.635562 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.635617 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.635626 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.635644 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.635656 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:02Z","lastTransitionTime":"2026-01-29T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.738534 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.738894 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.739019 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.739108 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.739263 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:02Z","lastTransitionTime":"2026-01-29T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.842967 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.843422 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.843715 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.844694 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.845027 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:02Z","lastTransitionTime":"2026-01-29T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.948851 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.948892 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.948904 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.948986 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:02 crc kubenswrapper[4895]: I0129 08:42:02.949001 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:02Z","lastTransitionTime":"2026-01-29T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.051669 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.052039 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.052168 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.052287 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.052365 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:03Z","lastTransitionTime":"2026-01-29T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.157220 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.157545 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.157627 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.157699 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.157772 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:03Z","lastTransitionTime":"2026-01-29T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.198006 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 23:56:53.063254094 +0000 UTC Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.211054 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:03 crc kubenswrapper[4895]: E0129 08:42:03.211291 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.260114 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.260161 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.260173 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.260187 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.260198 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:03Z","lastTransitionTime":"2026-01-29T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.363142 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.363175 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.363187 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.363202 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.363214 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:03Z","lastTransitionTime":"2026-01-29T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.466150 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.466211 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.466223 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.466238 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.466248 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:03Z","lastTransitionTime":"2026-01-29T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.570219 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.570294 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.570308 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.570339 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.570354 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:03Z","lastTransitionTime":"2026-01-29T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.673150 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.673200 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.673215 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.673242 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.673259 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:03Z","lastTransitionTime":"2026-01-29T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.777174 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.777230 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.777247 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.777269 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.777284 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:03Z","lastTransitionTime":"2026-01-29T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.880036 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.880094 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.880104 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.880124 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.880133 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:03Z","lastTransitionTime":"2026-01-29T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.983510 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.983565 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.983580 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.983599 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:03 crc kubenswrapper[4895]: I0129 08:42:03.983612 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:03Z","lastTransitionTime":"2026-01-29T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.087059 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.087102 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.087118 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.087135 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.087150 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:04Z","lastTransitionTime":"2026-01-29T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.189573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.189620 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.189628 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.189644 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.189682 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:04Z","lastTransitionTime":"2026-01-29T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.198797 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 22:49:43.692554637 +0000 UTC Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.210198 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:04 crc kubenswrapper[4895]: E0129 08:42:04.210413 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.210651 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:04 crc kubenswrapper[4895]: E0129 08:42:04.210723 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.210865 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:04 crc kubenswrapper[4895]: E0129 08:42:04.210973 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.293558 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.293605 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.293623 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.293647 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.293665 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:04Z","lastTransitionTime":"2026-01-29T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.396447 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.396482 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.396491 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.396504 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.396514 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:04Z","lastTransitionTime":"2026-01-29T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.506791 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.506877 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.506893 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.506995 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.507021 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:04Z","lastTransitionTime":"2026-01-29T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.609110 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.609138 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.609147 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.609161 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.609171 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:04Z","lastTransitionTime":"2026-01-29T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.711799 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.711840 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.711853 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.711871 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.711882 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:04Z","lastTransitionTime":"2026-01-29T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.814386 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.814433 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.814442 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.814459 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.814471 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:04Z","lastTransitionTime":"2026-01-29T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.916826 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.916884 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.916898 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.916947 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:04 crc kubenswrapper[4895]: I0129 08:42:04.916964 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:04Z","lastTransitionTime":"2026-01-29T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.019810 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.019884 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.019901 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.019959 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.020018 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:05Z","lastTransitionTime":"2026-01-29T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.123532 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.123590 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.123607 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.123633 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.123654 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:05Z","lastTransitionTime":"2026-01-29T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.199960 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:10:11.348931262 +0000 UTC Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.210657 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:05 crc kubenswrapper[4895]: E0129 08:42:05.210858 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.226081 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.226119 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.226140 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.226162 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.226174 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:05Z","lastTransitionTime":"2026-01-29T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.329163 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.329216 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.329227 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.329249 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.329264 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:05Z","lastTransitionTime":"2026-01-29T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.431751 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.431789 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.431801 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.431815 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.431826 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:05Z","lastTransitionTime":"2026-01-29T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.535269 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.535307 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.535316 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.535333 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.535344 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:05Z","lastTransitionTime":"2026-01-29T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.637794 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.637839 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.637851 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.637873 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.637887 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:05Z","lastTransitionTime":"2026-01-29T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.740642 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.740705 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.740717 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.740736 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.740748 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:05Z","lastTransitionTime":"2026-01-29T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.843731 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.843775 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.843784 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.843801 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.843811 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:05Z","lastTransitionTime":"2026-01-29T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.946221 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.946271 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.946306 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.946553 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:05 crc kubenswrapper[4895]: I0129 08:42:05.946570 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:05Z","lastTransitionTime":"2026-01-29T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.053173 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.053222 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.053234 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.053253 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.053267 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:06Z","lastTransitionTime":"2026-01-29T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.155619 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.155660 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.155672 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.155689 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.155701 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:06Z","lastTransitionTime":"2026-01-29T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.200990 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 02:53:02.953575864 +0000 UTC Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.210483 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.210583 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.210642 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:06 crc kubenswrapper[4895]: E0129 08:42:06.210731 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:06 crc kubenswrapper[4895]: E0129 08:42:06.210863 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:06 crc kubenswrapper[4895]: E0129 08:42:06.210969 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.258570 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.258626 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.258637 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.258653 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.258663 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:06Z","lastTransitionTime":"2026-01-29T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.362166 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.362241 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.362253 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.362273 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.362286 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:06Z","lastTransitionTime":"2026-01-29T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.465421 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.465470 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.465479 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.465497 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.465511 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:06Z","lastTransitionTime":"2026-01-29T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.567687 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.567737 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.567747 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.567766 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.567776 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:06Z","lastTransitionTime":"2026-01-29T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.670283 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.670333 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.670342 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.670360 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.670370 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:06Z","lastTransitionTime":"2026-01-29T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.773976 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.774020 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.774049 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.774065 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.774075 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:06Z","lastTransitionTime":"2026-01-29T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.876482 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.876535 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.876546 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.876566 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.876578 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:06Z","lastTransitionTime":"2026-01-29T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.978960 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.979000 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.979009 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.979026 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:06 crc kubenswrapper[4895]: I0129 08:42:06.979039 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:06Z","lastTransitionTime":"2026-01-29T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.081911 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.082006 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.082018 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.082039 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.082051 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:07Z","lastTransitionTime":"2026-01-29T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.184319 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.184371 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.184381 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.184400 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.184412 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:07Z","lastTransitionTime":"2026-01-29T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.201784 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 04:43:28.970063787 +0000 UTC Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.209210 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.209260 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.209269 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.209285 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.209306 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:07Z","lastTransitionTime":"2026-01-29T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.210565 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:07 crc kubenswrapper[4895]: E0129 08:42:07.210751 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:07 crc kubenswrapper[4895]: E0129 08:42:07.224133 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.228712 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.228752 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.228761 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.228779 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.228791 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:07Z","lastTransitionTime":"2026-01-29T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:07 crc kubenswrapper[4895]: E0129 08:42:07.242101 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.246268 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.246324 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.246336 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.246358 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.246371 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:07Z","lastTransitionTime":"2026-01-29T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:07 crc kubenswrapper[4895]: E0129 08:42:07.261799 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.266228 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.266263 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.266275 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.266295 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.266311 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:07Z","lastTransitionTime":"2026-01-29T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:07 crc kubenswrapper[4895]: E0129 08:42:07.279072 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [identical status patch payload elided; same as the 08:42:07.261799 entry above] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.283260 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.283290 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.283299 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.283315 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.283325 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:07Z","lastTransitionTime":"2026-01-29T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:07 crc kubenswrapper[4895]: E0129 08:42:07.298088 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [identical status patch payload elided; same as the 08:42:07.261799 entry above] {\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:07Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:07 crc kubenswrapper[4895]: E0129 08:42:07.298255 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.299976 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.300011 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.300022 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.300045 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.300059 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:07Z","lastTransitionTime":"2026-01-29T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.403293 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.403359 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.403374 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.403399 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.403419 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:07Z","lastTransitionTime":"2026-01-29T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.506009 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.506078 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.506089 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.506104 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.506113 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:07Z","lastTransitionTime":"2026-01-29T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.608300 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.608334 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.608345 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.608363 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.608376 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:07Z","lastTransitionTime":"2026-01-29T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.711013 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.711059 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.711069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.711105 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.711115 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:07Z","lastTransitionTime":"2026-01-29T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.813349 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.813419 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.813435 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.813457 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.813470 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:07Z","lastTransitionTime":"2026-01-29T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.915792 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.915855 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.915864 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.915883 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:07 crc kubenswrapper[4895]: I0129 08:42:07.915894 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:07Z","lastTransitionTime":"2026-01-29T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.019462 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.019521 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.019536 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.019564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.019585 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:08Z","lastTransitionTime":"2026-01-29T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.122284 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.122330 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.122341 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.122359 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.122371 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:08Z","lastTransitionTime":"2026-01-29T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.202520 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:34:32.523498675 +0000 UTC Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.210849 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.210932 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.210958 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:08 crc kubenswrapper[4895]: E0129 08:42:08.211035 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:08 crc kubenswrapper[4895]: E0129 08:42:08.211127 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:08 crc kubenswrapper[4895]: E0129 08:42:08.211274 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.211976 4895 scope.go:117] "RemoveContainer" containerID="7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3" Jan 29 08:42:08 crc kubenswrapper[4895]: E0129 08:42:08.212120 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.224343 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.224393 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.224406 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.224425 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.224436 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:08Z","lastTransitionTime":"2026-01-29T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.327225 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.327275 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.327289 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.327316 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.327329 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:08Z","lastTransitionTime":"2026-01-29T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.430495 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.430541 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.430551 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.430573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.430584 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:08Z","lastTransitionTime":"2026-01-29T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.533258 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.533298 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.533307 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.533326 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.533339 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:08Z","lastTransitionTime":"2026-01-29T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.635427 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.635543 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.635563 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.635587 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.635625 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:08Z","lastTransitionTime":"2026-01-29T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.738257 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.738304 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.738315 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.738332 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.738344 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:08Z","lastTransitionTime":"2026-01-29T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.840634 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.840689 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.840699 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.840718 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.840734 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:08Z","lastTransitionTime":"2026-01-29T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.944774 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.944872 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.944980 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.945028 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:08 crc kubenswrapper[4895]: I0129 08:42:08.945059 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:08Z","lastTransitionTime":"2026-01-29T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.047810 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.047852 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.047861 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.047878 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.047887 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:09Z","lastTransitionTime":"2026-01-29T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.151244 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.151287 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.151296 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.151312 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.151322 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:09Z","lastTransitionTime":"2026-01-29T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.202764 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:49:26.967402758 +0000 UTC Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.210368 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:09 crc kubenswrapper[4895]: E0129 08:42:09.210596 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.227511 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.239418 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.254277 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.254338 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.254351 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.254370 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.254385 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:09Z","lastTransitionTime":"2026-01-29T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.256735 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.276906 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:52Z\\\",\\\"message\\\":\\\"08:41:52.069211 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-z82hk\\\\nI0129 08:41:52.069212 6510 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-q9lpx in node crc\\\\nI0129 08:41:52.069225 6510 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-image-registry/node-ca-q9lpx after 0 failed attempt(s)\\\\nI0129 08:41:52.069218 6510 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-z82hk in node crc\\\\nI0129 08:41:52.069018 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069267 6510 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0129 08:41:52.069273 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0129 08:41:52.069277 6510 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069230 6510 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-q9lpx\\\\nI0129 08:41:52.069064 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123c
f96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.290336 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee62f73cca5e2bfdd470a4588d7e049a51be6
4fae5112e303145bc00c01ad5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.305648 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.320073 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.333756 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.346945 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.356618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.356702 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.356716 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:09 crc 
kubenswrapper[4895]: I0129 08:42:09.356736 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.356748 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:09Z","lastTransitionTime":"2026-01-29T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.359078 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.372430 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc 
kubenswrapper[4895]: I0129 08:42:09.388464 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc 
kubenswrapper[4895]: I0129 08:42:09.402883 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p
wz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.419462 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set 
denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.436603 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.450575 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ce52dc-df05-45bb-9f9c-e0512bbf8b4b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c069173f445cfb105a3b28f49ce9bf7793ee448913a03b8b6d1ddbef91daee04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9854afcfb65e0bf01d43a3c9135ec03936b53449edd9a024f857fea2ab013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd7a8365fe1196984119afbb2bd6f61289bb5396b77f733275c1e275519b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.461103 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.461178 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.461195 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.461214 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.461231 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:09Z","lastTransitionTime":"2026-01-29T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.472314 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:09Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.563459 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.563490 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.563501 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.563519 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.563529 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:09Z","lastTransitionTime":"2026-01-29T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.666208 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.666260 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.666275 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.666295 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.666307 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:09Z","lastTransitionTime":"2026-01-29T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.769394 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.769434 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.769445 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.769461 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.769473 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:09Z","lastTransitionTime":"2026-01-29T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.873177 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.873249 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.873262 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.873285 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.873300 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:09Z","lastTransitionTime":"2026-01-29T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.975349 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.975393 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.975406 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.975425 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:09 crc kubenswrapper[4895]: I0129 08:42:09.975437 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:09Z","lastTransitionTime":"2026-01-29T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.079011 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.079061 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.079071 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.079088 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.079097 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:10Z","lastTransitionTime":"2026-01-29T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.181882 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.181932 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.181942 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.181956 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.181967 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:10Z","lastTransitionTime":"2026-01-29T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.203429 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:49:08.953181654 +0000 UTC Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.210986 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.211010 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.211094 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:10 crc kubenswrapper[4895]: E0129 08:42:10.211159 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:10 crc kubenswrapper[4895]: E0129 08:42:10.211328 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:10 crc kubenswrapper[4895]: E0129 08:42:10.211422 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.284551 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.284595 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.284605 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.284627 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.284641 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:10Z","lastTransitionTime":"2026-01-29T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.388561 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.388618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.388628 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.388648 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.388659 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:10Z","lastTransitionTime":"2026-01-29T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.491726 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.491769 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.491780 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.491799 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.491810 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:10Z","lastTransitionTime":"2026-01-29T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.594706 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.594759 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.594774 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.594795 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.594809 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:10Z","lastTransitionTime":"2026-01-29T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.697791 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.697833 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.697842 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.697856 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.697869 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:10Z","lastTransitionTime":"2026-01-29T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.799905 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.799968 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.799981 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.799998 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.800008 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:10Z","lastTransitionTime":"2026-01-29T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.902741 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.902799 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.902810 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.902828 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:10 crc kubenswrapper[4895]: I0129 08:42:10.902841 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:10Z","lastTransitionTime":"2026-01-29T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.005243 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.005289 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.005298 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.005313 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.005322 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:11Z","lastTransitionTime":"2026-01-29T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.108028 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.108058 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.108066 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.108081 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.108090 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:11Z","lastTransitionTime":"2026-01-29T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.203931 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:53:23.50810876 +0000 UTC Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.210376 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:11 crc kubenswrapper[4895]: E0129 08:42:11.210525 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.210951 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.210998 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.211013 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.211032 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.211049 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:11Z","lastTransitionTime":"2026-01-29T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.313577 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.313656 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.313670 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.313690 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.313721 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:11Z","lastTransitionTime":"2026-01-29T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.416300 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.416340 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.416350 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.416369 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.416382 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:11Z","lastTransitionTime":"2026-01-29T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.518535 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.518586 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.518598 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.518614 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.518624 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:11Z","lastTransitionTime":"2026-01-29T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.621589 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.621635 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.621647 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.621664 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.621676 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:11Z","lastTransitionTime":"2026-01-29T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.687724 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs\") pod \"network-metrics-daemon-g4585\" (UID: \"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\") " pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:11 crc kubenswrapper[4895]: E0129 08:42:11.687996 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:42:11 crc kubenswrapper[4895]: E0129 08:42:11.688110 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs podName:d167bf78-4ea9-42d8-8ab6-6aaf234e102e nodeName:}" failed. No retries permitted until 2026-01-29 08:42:43.688083586 +0000 UTC m=+105.329591732 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs") pod "network-metrics-daemon-g4585" (UID: "d167bf78-4ea9-42d8-8ab6-6aaf234e102e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.724406 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.724475 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.724492 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.724519 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.724536 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:11Z","lastTransitionTime":"2026-01-29T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.827695 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.827769 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.827790 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.827814 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.827827 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:11Z","lastTransitionTime":"2026-01-29T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.930066 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.930181 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.930192 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.930210 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:11 crc kubenswrapper[4895]: I0129 08:42:11.930222 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:11Z","lastTransitionTime":"2026-01-29T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.033151 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.033202 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.033214 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.033233 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.033245 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:12Z","lastTransitionTime":"2026-01-29T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.135745 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.135792 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.135801 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.135818 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.135829 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:12Z","lastTransitionTime":"2026-01-29T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.204494 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:04:26.581593185 +0000 UTC Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.210856 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.210863 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:12 crc kubenswrapper[4895]: E0129 08:42:12.211070 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.210862 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:12 crc kubenswrapper[4895]: E0129 08:42:12.211153 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:12 crc kubenswrapper[4895]: E0129 08:42:12.211185 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.239012 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.239064 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.239076 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.239094 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.239106 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:12Z","lastTransitionTime":"2026-01-29T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.341865 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.341933 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.341949 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.341967 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.341978 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:12Z","lastTransitionTime":"2026-01-29T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.446081 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.446131 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.446140 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.446157 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.446167 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:12Z","lastTransitionTime":"2026-01-29T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.549595 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.549648 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.549659 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.549682 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.549695 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:12Z","lastTransitionTime":"2026-01-29T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.652624 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.652716 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.652729 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.652748 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.652758 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:12Z","lastTransitionTime":"2026-01-29T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.756184 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.756241 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.756250 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.756268 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.756341 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:12Z","lastTransitionTime":"2026-01-29T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.858842 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.858908 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.858936 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.858957 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.858969 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:12Z","lastTransitionTime":"2026-01-29T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.963784 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.963836 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.963849 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.963869 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:12 crc kubenswrapper[4895]: I0129 08:42:12.963882 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:12Z","lastTransitionTime":"2026-01-29T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.066363 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.066596 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.066610 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.066629 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.066646 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:13Z","lastTransitionTime":"2026-01-29T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.169023 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.169068 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.169076 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.169092 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.169101 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:13Z","lastTransitionTime":"2026-01-29T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.204704 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 08:55:57.822356515 +0000 UTC Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.211084 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:13 crc kubenswrapper[4895]: E0129 08:42:13.211305 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.271482 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.271526 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.271538 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.271561 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.271574 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:13Z","lastTransitionTime":"2026-01-29T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.374810 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.374866 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.374878 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.374895 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.374908 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:13Z","lastTransitionTime":"2026-01-29T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.477338 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.477394 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.477404 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.477424 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.477436 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:13Z","lastTransitionTime":"2026-01-29T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.580049 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.580108 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.580116 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.580132 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.580162 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:13Z","lastTransitionTime":"2026-01-29T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.683253 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.683297 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.683309 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.683327 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.683339 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:13Z","lastTransitionTime":"2026-01-29T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.785497 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.785538 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.785548 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.785564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.785574 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:13Z","lastTransitionTime":"2026-01-29T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.888092 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.888152 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.888169 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.888191 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.888204 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:13Z","lastTransitionTime":"2026-01-29T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.991031 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.991073 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.991082 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.991099 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:13 crc kubenswrapper[4895]: I0129 08:42:13.991111 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:13Z","lastTransitionTime":"2026-01-29T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.093681 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.093730 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.093739 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.093758 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.093769 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:14Z","lastTransitionTime":"2026-01-29T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.197251 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.197320 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.197334 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.197357 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.197374 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:14Z","lastTransitionTime":"2026-01-29T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.205680 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 15:43:45.727625417 +0000 UTC Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.211042 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.211102 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:14 crc kubenswrapper[4895]: E0129 08:42:14.211199 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.211107 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:14 crc kubenswrapper[4895]: E0129 08:42:14.211299 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:14 crc kubenswrapper[4895]: E0129 08:42:14.211452 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.299957 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.300014 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.300027 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.300052 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.300064 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:14Z","lastTransitionTime":"2026-01-29T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.402503 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.402544 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.402554 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.402573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.402584 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:14Z","lastTransitionTime":"2026-01-29T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.505433 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.505500 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.505517 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.505540 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.505556 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:14Z","lastTransitionTime":"2026-01-29T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.608449 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.608498 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.608510 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.608529 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.608541 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:14Z","lastTransitionTime":"2026-01-29T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.710548 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.711326 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.711343 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.711363 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.711375 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:14Z","lastTransitionTime":"2026-01-29T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.814178 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.814233 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.814246 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.814271 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.814287 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:14Z","lastTransitionTime":"2026-01-29T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.917515 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.917570 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.917579 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.917605 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:14 crc kubenswrapper[4895]: I0129 08:42:14.917624 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:14Z","lastTransitionTime":"2026-01-29T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.020817 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.020943 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.020963 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.020988 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.021001 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:15Z","lastTransitionTime":"2026-01-29T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.123835 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.123901 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.123972 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.124001 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.124018 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:15Z","lastTransitionTime":"2026-01-29T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.206265 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:00:30.005638815 +0000 UTC Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.210693 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:15 crc kubenswrapper[4895]: E0129 08:42:15.210866 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.227038 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.227094 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.227107 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.227128 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.227144 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:15Z","lastTransitionTime":"2026-01-29T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.330280 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.330340 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.330354 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.330376 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.330389 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:15Z","lastTransitionTime":"2026-01-29T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.433550 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.433601 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.433610 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.433625 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.433637 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:15Z","lastTransitionTime":"2026-01-29T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.536154 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.536204 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.536220 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.536242 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.536257 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:15Z","lastTransitionTime":"2026-01-29T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.637494 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b4dgj_69ba7dcf-e7a0-4408-983b-09a07851d01c/kube-multus/0.log" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.637564 4895 generic.go:334] "Generic (PLEG): container finished" podID="69ba7dcf-e7a0-4408-983b-09a07851d01c" containerID="1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e" exitCode=1 Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.637610 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b4dgj" event={"ID":"69ba7dcf-e7a0-4408-983b-09a07851d01c","Type":"ContainerDied","Data":"1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e"} Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.638356 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.638400 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.638441 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.638561 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.638584 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:15Z","lastTransitionTime":"2026-01-29T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.639705 4895 scope.go:117] "RemoveContainer" containerID="1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.653112 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz
5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.666579 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.682444 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.702046 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.717290 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.730215 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc 
kubenswrapper[4895]: I0129 08:42:15.741092 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.741131 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.741144 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.741163 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.741174 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:15Z","lastTransitionTime":"2026-01-29T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.743856 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.762866 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 
08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.779466 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.794713 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ce52dc-df05-45bb-9f9c-e0512bbf8b4b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c069173f445cfb105a3b28f49ce9bf7793ee448913a03b8b6d1ddbef91daee04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9854afcfb65e0bf01d43a3c9135ec03936b53449edd9a024f857fea2ab013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd7a8365fe1196984119afbb2bd6f61289bb5396b77f733275c1e275519b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.811683 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.828998 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:42:14Z\\\",\\\"message\\\":\\\"2026-01-29T08:41:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0fc7b64e-5a5d-4e42-96c6-a5b15855b9aa\\\\n2026-01-29T08:41:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0fc7b64e-5a5d-4e42-96c6-a5b15855b9aa to /host/opt/cni/bin/\\\\n2026-01-29T08:41:29Z [verbose] multus-daemon started\\\\n2026-01-29T08:41:29Z [verbose] Readiness Indicator file check\\\\n2026-01-29T08:42:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.844052 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.844071 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.844094 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.844103 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.844117 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.844127 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:15Z","lastTransitionTime":"2026-01-29T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.859453 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.875714 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177
faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.897964 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:52Z\\\",\\\"message\\\":\\\"08:41:52.069211 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-z82hk\\\\nI0129 08:41:52.069212 6510 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-q9lpx in node crc\\\\nI0129 08:41:52.069225 6510 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-image-registry/node-ca-q9lpx after 0 failed attempt(s)\\\\nI0129 08:41:52.069218 6510 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-z82hk in node crc\\\\nI0129 08:41:52.069018 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069267 6510 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0129 08:41:52.069273 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0129 08:41:52.069277 6510 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069230 6510 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-q9lpx\\\\nI0129 08:41:52.069064 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123c
f96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.912602 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee62f73cca5e2bfdd470a4588d7e049a51be6
4fae5112e303145bc00c01ad5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:15Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.946891 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.946950 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.946960 4895 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.946977 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:15 crc kubenswrapper[4895]: I0129 08:42:15.946988 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:15Z","lastTransitionTime":"2026-01-29T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.049490 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.049522 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.049529 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.049543 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.049552 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:16Z","lastTransitionTime":"2026-01-29T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.152561 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.152601 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.152610 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.152627 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.152636 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:16Z","lastTransitionTime":"2026-01-29T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.207083 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 10:58:01.678797163 +0000 UTC Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.210406 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.210539 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:16 crc kubenswrapper[4895]: E0129 08:42:16.210654 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.210772 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:16 crc kubenswrapper[4895]: E0129 08:42:16.210909 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:16 crc kubenswrapper[4895]: E0129 08:42:16.211024 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.255964 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.256015 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.256029 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.256047 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.256058 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:16Z","lastTransitionTime":"2026-01-29T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.358685 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.358735 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.358744 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.358759 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.358770 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:16Z","lastTransitionTime":"2026-01-29T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.460855 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.460897 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.460907 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.460941 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.460953 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:16Z","lastTransitionTime":"2026-01-29T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.563886 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.563996 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.564015 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.564037 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.564051 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:16Z","lastTransitionTime":"2026-01-29T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.644161 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b4dgj_69ba7dcf-e7a0-4408-983b-09a07851d01c/kube-multus/0.log" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.644302 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b4dgj" event={"ID":"69ba7dcf-e7a0-4408-983b-09a07851d01c","Type":"ContainerStarted","Data":"115f19a42723357b8ac63e4f01e8d591ea064cf2757203925018f142ddbf1336"} Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.661352 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ce52dc-df05-45bb-9f9c-e0512bbf8b4b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c069173f445cfb105a3b28f49ce9bf7793ee448913a03b8b6d1ddbef91daee04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9854afcfb65e0bf01d43a3c9135ec03936b53449edd9a024f857fea2ab013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd7a8365fe1196984119afbb2bd6f61289bb5396b77f733275c1e275519b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.666704 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.666770 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.666786 4895 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.666806 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.666818 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:16Z","lastTransitionTime":"2026-01-29T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.679712 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.696621 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://115f19a42723357b8ac63e4f01e8d591ea064cf2757203925018f142ddbf1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:42:14Z\\\",\\\"message\\\":\\\"2026-01-29T08:41:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0fc7b64e-5a5d-4e42-96c6-a5b15855b9aa\\\\n2026-01-29T08:41:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0fc7b64e-5a5d-4e42-96c6-a5b15855b9aa to /host/opt/cni/bin/\\\\n2026-01-29T08:41:29Z [verbose] multus-daemon started\\\\n2026-01-29T08:41:29Z [verbose] 
Readiness Indicator file check\\\\n2026-01-29T08:42:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.712158 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f
46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.731004 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a117584
98a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.744672 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.758017 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee62f73cca5e2bfdd470a4588d7e049a51be6
4fae5112e303145bc00c01ad5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.769048 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.769091 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.769106 4895 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.769125 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.769139 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:16Z","lastTransitionTime":"2026-01-29T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.771383 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb27670
3f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.785616 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc3
67b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.805903 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177
faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.827261 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:52Z\\\",\\\"message\\\":\\\"08:41:52.069211 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-z82hk\\\\nI0129 08:41:52.069212 6510 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-q9lpx in node crc\\\\nI0129 08:41:52.069225 6510 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-image-registry/node-ca-q9lpx after 0 failed attempt(s)\\\\nI0129 08:41:52.069218 6510 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-z82hk in node crc\\\\nI0129 08:41:52.069018 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069267 6510 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0129 08:41:52.069273 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0129 08:41:52.069277 6510 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069230 6510 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-q9lpx\\\\nI0129 08:41:52.069064 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123c
f96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.842281 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.857123 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.871452 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 
08:42:16.871532 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.871544 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.871563 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.871574 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:16Z","lastTransitionTime":"2026-01-29T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.872451 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.888824 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.903200 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.918651 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:16Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:16 crc 
kubenswrapper[4895]: I0129 08:42:16.974328 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.974375 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.974433 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.974450 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:16 crc kubenswrapper[4895]: I0129 08:42:16.974463 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:16Z","lastTransitionTime":"2026-01-29T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.077715 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.077753 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.077762 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.077779 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.077792 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:17Z","lastTransitionTime":"2026-01-29T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.180815 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.180863 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.180872 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.180889 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.180900 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:17Z","lastTransitionTime":"2026-01-29T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.208198 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 11:09:41.683628142 +0000 UTC Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.210813 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:17 crc kubenswrapper[4895]: E0129 08:42:17.211168 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.284204 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.284270 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.284290 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.284311 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.284323 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:17Z","lastTransitionTime":"2026-01-29T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.305809 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.305855 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.305865 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.305882 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.305894 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:17Z","lastTransitionTime":"2026-01-29T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:17 crc kubenswrapper[4895]: E0129 08:42:17.319768 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.325069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.325117 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.325128 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.325147 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.325159 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:17Z","lastTransitionTime":"2026-01-29T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:17 crc kubenswrapper[4895]: E0129 08:42:17.342331 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.346322 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.346373 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.346386 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.346404 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.346415 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:17Z","lastTransitionTime":"2026-01-29T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:17 crc kubenswrapper[4895]: E0129 08:42:17.361585 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.366464 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.366508 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.366518 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.366539 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.366551 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:17Z","lastTransitionTime":"2026-01-29T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:17 crc kubenswrapper[4895]: E0129 08:42:17.380988 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.384878 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.384962 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.384976 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.384998 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.385010 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:17Z","lastTransitionTime":"2026-01-29T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:17 crc kubenswrapper[4895]: E0129 08:42:17.399116 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:17Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:17 crc kubenswrapper[4895]: E0129 08:42:17.399290 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.401185 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.401233 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.401247 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.401276 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.401292 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:17Z","lastTransitionTime":"2026-01-29T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.504496 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.504567 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.504577 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.504597 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.504607 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:17Z","lastTransitionTime":"2026-01-29T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.607490 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.607543 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.607553 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.607571 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.607585 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:17Z","lastTransitionTime":"2026-01-29T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.709540 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.709587 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.709597 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.709617 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.709628 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:17Z","lastTransitionTime":"2026-01-29T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.812794 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.812847 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.812862 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.812882 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.812896 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:17Z","lastTransitionTime":"2026-01-29T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.915476 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.915533 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.915550 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.915572 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:17 crc kubenswrapper[4895]: I0129 08:42:17.915586 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:17Z","lastTransitionTime":"2026-01-29T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.018584 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.018643 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.018657 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.018681 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.018700 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:18Z","lastTransitionTime":"2026-01-29T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.121404 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.121473 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.121516 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.121540 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.121552 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:18Z","lastTransitionTime":"2026-01-29T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.209173 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 21:01:59.10398936 +0000 UTC Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.210501 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.210520 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.210673 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:18 crc kubenswrapper[4895]: E0129 08:42:18.210769 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:18 crc kubenswrapper[4895]: E0129 08:42:18.210949 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:18 crc kubenswrapper[4895]: E0129 08:42:18.211223 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.225231 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.225288 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.225297 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.225314 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.225323 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:18Z","lastTransitionTime":"2026-01-29T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.329350 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.329422 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.329436 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.329458 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.329473 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:18Z","lastTransitionTime":"2026-01-29T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.431982 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.432020 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.432030 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.432048 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.432058 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:18Z","lastTransitionTime":"2026-01-29T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.535424 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.535474 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.535486 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.535503 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.535514 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:18Z","lastTransitionTime":"2026-01-29T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.639690 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.639757 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.639770 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.639795 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.639809 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:18Z","lastTransitionTime":"2026-01-29T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.743899 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.744008 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.744025 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.744051 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.744072 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:18Z","lastTransitionTime":"2026-01-29T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.846991 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.847039 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.847048 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.847067 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.847083 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:18Z","lastTransitionTime":"2026-01-29T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.950019 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.950067 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.950077 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.950096 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:18 crc kubenswrapper[4895]: I0129 08:42:18.950107 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:18Z","lastTransitionTime":"2026-01-29T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.052681 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.052738 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.052752 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.052770 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.052782 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:19Z","lastTransitionTime":"2026-01-29T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.155093 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.155149 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.155162 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.155183 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.155198 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:19Z","lastTransitionTime":"2026-01-29T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.210413 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 23:04:20.561727496 +0000 UTC Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.210464 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:19 crc kubenswrapper[4895]: E0129 08:42:19.210726 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.230888 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:52Z\\\",\\\"message\\\":\\\"08:41:52.069211 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-z82hk\\\\nI0129 08:41:52.069212 6510 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-q9lpx in node crc\\\\nI0129 08:41:52.069225 6510 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-image-registry/node-ca-q9lpx after 0 failed attempt(s)\\\\nI0129 08:41:52.069218 6510 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-z82hk in node crc\\\\nI0129 08:41:52.069018 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069267 6510 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0129 08:41:52.069273 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0129 08:41:52.069277 6510 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069230 6510 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-q9lpx\\\\nI0129 08:41:52.069064 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123c
f96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.244728 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee62f73cca5e2bfdd470a4588d7e049a51be6
4fae5112e303145bc00c01ad5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.260288 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.260345 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.260358 4895 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.260381 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.260395 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:19Z","lastTransitionTime":"2026-01-29T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.261275 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb27670
3f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.272611 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc3
67b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.286600 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177
faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.300867 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.317444 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.329734 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc 
kubenswrapper[4895]: I0129 08:42:19.345193 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.361673 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.362444 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.362493 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.362505 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:19 crc 
kubenswrapper[4895]: I0129 08:42:19.362523 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.362536 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:19Z","lastTransitionTime":"2026-01-29T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.374039 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.387823 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.399966 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ce52dc-df05-45bb-9f9c-e0512bbf8b4b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c069173f445cfb105a3b28f49ce9bf7793ee448913a03b8b6d1ddbef91daee04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9854afcfb65e0bf01d43a3c9135ec03936b53449edd9a024f857fea2ab013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd7a8365fe1196984119afbb2bd6f61289bb5396b77f733275c1e275519b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.413212 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.429825 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://115f19a42723357b8ac63e4f01e8d591ea064cf2757203925018f142ddbf1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:42:14Z\\\",\\\"message\\\":\\\"2026-01-29T08:41:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_0fc7b64e-5a5d-4e42-96c6-a5b15855b9aa\\\\n2026-01-29T08:41:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0fc7b64e-5a5d-4e42-96c6-a5b15855b9aa to /host/opt/cni/bin/\\\\n2026-01-29T08:41:29Z [verbose] multus-daemon started\\\\n2026-01-29T08:41:29Z [verbose] Readiness Indicator file check\\\\n2026-01-29T08:42:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.441059 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.456751 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T0
8:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:19Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.466227 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.466297 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.466309 4895 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.466329 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.466343 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:19Z","lastTransitionTime":"2026-01-29T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.569218 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.569279 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.569293 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.569314 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.569327 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:19Z","lastTransitionTime":"2026-01-29T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.672222 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.672277 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.672290 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.672310 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.672321 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:19Z","lastTransitionTime":"2026-01-29T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.775037 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.775082 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.775094 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.775119 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.775134 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:19Z","lastTransitionTime":"2026-01-29T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.878803 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.878848 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.878860 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.878878 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.878960 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:19Z","lastTransitionTime":"2026-01-29T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.981740 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.981802 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.981812 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.981835 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:19 crc kubenswrapper[4895]: I0129 08:42:19.981845 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:19Z","lastTransitionTime":"2026-01-29T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.085763 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.085834 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.085846 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.085864 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.085875 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:20Z","lastTransitionTime":"2026-01-29T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.189523 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.189581 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.189594 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.189617 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.189634 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:20Z","lastTransitionTime":"2026-01-29T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.210984 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:42:29.75169328 +0000 UTC Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.211173 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.211200 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.211172 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:20 crc kubenswrapper[4895]: E0129 08:42:20.211336 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:20 crc kubenswrapper[4895]: E0129 08:42:20.211451 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:20 crc kubenswrapper[4895]: E0129 08:42:20.211645 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.293415 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.293485 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.293504 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.293526 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.293539 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:20Z","lastTransitionTime":"2026-01-29T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.396900 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.396988 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.397005 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.397025 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.397036 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:20Z","lastTransitionTime":"2026-01-29T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.500768 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.500853 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.500873 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.500904 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.500979 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:20Z","lastTransitionTime":"2026-01-29T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.603987 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.604055 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.604068 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.604090 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.604105 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:20Z","lastTransitionTime":"2026-01-29T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.706811 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.706864 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.706882 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.706900 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.706943 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:20Z","lastTransitionTime":"2026-01-29T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.809756 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.809801 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.809814 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.809833 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.809846 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:20Z","lastTransitionTime":"2026-01-29T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.912829 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.912887 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.912900 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.912945 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:20 crc kubenswrapper[4895]: I0129 08:42:20.912960 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:20Z","lastTransitionTime":"2026-01-29T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.015420 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.015474 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.015491 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.015513 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.015525 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:21Z","lastTransitionTime":"2026-01-29T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.118637 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.118901 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.118936 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.118959 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.118976 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:21Z","lastTransitionTime":"2026-01-29T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.211216 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 13:08:33.204499935 +0000 UTC Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.211473 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:21 crc kubenswrapper[4895]: E0129 08:42:21.211649 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.221355 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.221401 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.221413 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.221429 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.221440 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:21Z","lastTransitionTime":"2026-01-29T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.324217 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.324294 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.324308 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.324336 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.324351 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:21Z","lastTransitionTime":"2026-01-29T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.427407 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.427457 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.427473 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.427495 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.427512 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:21Z","lastTransitionTime":"2026-01-29T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.530487 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.530527 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.530535 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.530554 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.530565 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:21Z","lastTransitionTime":"2026-01-29T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.633686 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.633747 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.633759 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.633779 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.633792 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:21Z","lastTransitionTime":"2026-01-29T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.736936 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.737009 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.737026 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.737050 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.737067 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:21Z","lastTransitionTime":"2026-01-29T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.841424 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.841475 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.841491 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.841520 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.841537 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:21Z","lastTransitionTime":"2026-01-29T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.944899 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.944961 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.944974 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.944996 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:21 crc kubenswrapper[4895]: I0129 08:42:21.945008 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:21Z","lastTransitionTime":"2026-01-29T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.047966 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.048012 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.048023 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.048041 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.048051 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:22Z","lastTransitionTime":"2026-01-29T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.150905 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.150972 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.150982 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.151001 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.151011 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:22Z","lastTransitionTime":"2026-01-29T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.211271 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.211441 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:07:35.724661665 +0000 UTC Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.211564 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:22 crc kubenswrapper[4895]: E0129 08:42:22.211654 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.211590 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:22 crc kubenswrapper[4895]: E0129 08:42:22.211784 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.211906 4895 scope.go:117] "RemoveContainer" containerID="7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3" Jan 29 08:42:22 crc kubenswrapper[4895]: E0129 08:42:22.211938 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.253955 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.253994 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.254004 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.254022 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.254060 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:22Z","lastTransitionTime":"2026-01-29T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.359190 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.359711 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.359725 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.359747 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.359761 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:22Z","lastTransitionTime":"2026-01-29T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.462546 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.462609 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.462623 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.462644 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.462656 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:22Z","lastTransitionTime":"2026-01-29T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.565194 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.565260 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.565275 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.565294 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.565306 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:22Z","lastTransitionTime":"2026-01-29T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.667779 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.667833 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.667849 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.667870 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.667882 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:22Z","lastTransitionTime":"2026-01-29T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.667984 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovnkube-controller/2.log" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.672667 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerStarted","Data":"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639"} Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.673138 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.692614 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.710421 4895 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ce52dc-df05-45bb-9f9c-e0512bbf8b4b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c069173f445cfb105a3b28f49ce9bf7793ee448913a03b8b6d1ddbef91daee04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9854afcfb65e0bf01d43a3c9135ec03936b53449edd9a024f857fea2ab013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6
b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd7a8365fe1196984119afbb2bd6f61289bb5396b77f733275c1e275519b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-
host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.728893 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.746656 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://115f19a42723357b8ac63e4f01e8d591ea064cf2757203925018f142ddbf1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:42:14Z\\\",\\\"message\\\":\\\"2026-01-29T08:41:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_0fc7b64e-5a5d-4e42-96c6-a5b15855b9aa\\\\n2026-01-29T08:41:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0fc7b64e-5a5d-4e42-96c6-a5b15855b9aa to /host/opt/cni/bin/\\\\n2026-01-29T08:41:29Z [verbose] multus-daemon started\\\\n2026-01-29T08:41:29Z [verbose] Readiness Indicator file check\\\\n2026-01-29T08:42:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.756802 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.770618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.770679 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.770693 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.770715 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.770731 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:22Z","lastTransitionTime":"2026-01-29T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.773353 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.796482 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:52Z\\\",\\\"message\\\":\\\"08:41:52.069211 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-z82hk\\\\nI0129 08:41:52.069212 6510 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-q9lpx in node crc\\\\nI0129 08:41:52.069225 6510 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-image-registry/node-ca-q9lpx after 0 failed attempt(s)\\\\nI0129 08:41:52.069218 6510 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-z82hk in node crc\\\\nI0129 08:41:52.069018 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069267 6510 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0129 08:41:52.069273 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0129 08:41:52.069277 6510 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069230 6510 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-q9lpx\\\\nI0129 08:41:52.069064 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{
\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.810160 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee62f73cca5e2bfdd470a4588d7e049a51be64fae5112e303145bc00c01ad5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:22 crc 
kubenswrapper[4895]: I0129 08:42:22.829216 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.846855 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 
08:42:22.869190 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8
967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"
cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.873276 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.873333 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.873346 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.873365 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.873377 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:22Z","lastTransitionTime":"2026-01-29T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.889789 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.903956 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.920578 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:22 crc 
kubenswrapper[4895]: I0129 08:42:22.937727 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.960429 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.976054 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.976099 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.976112 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:22 crc 
kubenswrapper[4895]: I0129 08:42:22.976128 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.976138 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:22Z","lastTransitionTime":"2026-01-29T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:22 crc kubenswrapper[4895]: I0129 08:42:22.977429 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:22Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.079889 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.079966 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.079976 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.079998 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:23 
crc kubenswrapper[4895]: I0129 08:42:23.080008 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:23Z","lastTransitionTime":"2026-01-29T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.183509 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.183554 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.183564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.183582 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.183592 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:23Z","lastTransitionTime":"2026-01-29T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.210999 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:23 crc kubenswrapper[4895]: E0129 08:42:23.211176 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.211808 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:29:54.569526576 +0000 UTC Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.287334 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.287387 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.287398 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.287421 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.287432 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:23Z","lastTransitionTime":"2026-01-29T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.390296 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.390492 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.390564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.390674 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.390748 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:23Z","lastTransitionTime":"2026-01-29T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.494253 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.494321 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.494338 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.494360 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.494375 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:23Z","lastTransitionTime":"2026-01-29T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.597772 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.597844 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.597857 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.597877 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.597891 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:23Z","lastTransitionTime":"2026-01-29T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.680794 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovnkube-controller/3.log" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.682326 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovnkube-controller/2.log" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.687865 4895 generic.go:334] "Generic (PLEG): container finished" podID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerID="38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639" exitCode=1 Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.687972 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerDied","Data":"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639"} Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.688080 4895 scope.go:117] "RemoveContainer" containerID="7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.688755 4895 scope.go:117] "RemoveContainer" containerID="38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639" Jan 29 08:42:23 crc kubenswrapper[4895]: E0129 08:42:23.689010 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.702948 4895 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.703202 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.703236 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.703247 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.703264 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.703275 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:23Z","lastTransitionTime":"2026-01-29T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.717380 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.733881 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177
faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.752580 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb9c5ba2c78d425ab7170dddb1db5baf6b6c96aa389326f8ad35e6283e689a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:41:52Z\\\",\\\"message\\\":\\\"08:41:52.069211 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-z82hk\\\\nI0129 08:41:52.069212 6510 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-q9lpx in node crc\\\\nI0129 08:41:52.069225 6510 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-image-registry/node-ca-q9lpx after 0 failed attempt(s)\\\\nI0129 08:41:52.069218 6510 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-z82hk in node crc\\\\nI0129 08:41:52.069018 6510 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069267 6510 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0129 08:41:52.069273 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0129 08:41:52.069277 6510 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0129 08:41:52.069230 6510 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-q9lpx\\\\nI0129 08:41:52.069064 6510 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:42:23Z\\\",\\\"message\\\":\\\"4 6949 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0129 08:42:23.339797 6949 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0129 08:42:23.339802 6949 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0129 08:42:23.339827 6949 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-diagnostics/network-check-target-xd92c\\\\nF0129 08:42:23.339827 6949 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\
\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.766802 4895 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee62f73cca5e2bfdd470a4588d7e049a51be64fae5112e303145bc00c01ad5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.781198 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.797694 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.805716 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 
08:42:23.805777 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.805790 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.805808 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.805821 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:23Z","lastTransitionTime":"2026-01-29T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.811540 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.823454 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.837337 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.849023 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc 
kubenswrapper[4895]: I0129 08:42:23.866703 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf8
1448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 
08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.882040 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.897191 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ce52dc-df05-45bb-9f9c-e0512bbf8b4b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c069173f445cfb105a3b28f49ce9bf7793ee448913a03b8b6d1ddbef91daee04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9854afcfb65e0bf01d43a3c9135ec03936b53449edd9a024f857fea2ab013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd7a8365fe1196984119afbb2bd6f61289bb5396b77f733275c1e275519b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.908689 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.908718 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.908727 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.908740 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.908750 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:23Z","lastTransitionTime":"2026-01-29T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.913586 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.929157 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://115f19a42723357b8ac63e4f01e8d591ea064cf2757203925018f142ddbf1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2
8738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:42:14Z\\\",\\\"message\\\":\\\"2026-01-29T08:41:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0fc7b64e-5a5d-4e42-96c6-a5b15855b9aa\\\\n2026-01-29T08:41:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0fc7b64e-5a5d-4e42-96c6-a5b15855b9aa to /host/opt/cni/bin/\\\\n2026-01-29T08:41:29Z [verbose] multus-daemon started\\\\n2026-01-29T08:41:29Z [verbose] Readiness Indicator file check\\\\n2026-01-29T08:42:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/
kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:23 crc kubenswrapper[4895]: I0129 08:42:23.941735 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:23Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.011262 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.011303 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.011315 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.011331 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.011340 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:24Z","lastTransitionTime":"2026-01-29T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.027040 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.027181 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 08:42:24.027206 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:28.027172062 +0000 UTC m=+149.668680388 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.027331 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 08:42:24.027384 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 08:42:24.027489 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 08:42:24.027501 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:43:28.02748069 +0000 UTC m=+149.668988836 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 08:42:24.027582 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 08:43:28.027563032 +0000 UTC m=+149.669071388 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.114469 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.114524 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.114534 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.114554 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.114565 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:24Z","lastTransitionTime":"2026-01-29T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.128019 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.128074 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 08:42:24.128211 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 08:42:24.128226 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 08:42:24.128231 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 
08:42:24.128279 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 08:42:24.128295 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 08:42:24.128241 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 08:42:24.128385 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 08:43:28.128366928 +0000 UTC m=+149.769875074 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 08:42:24.128414 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 08:43:28.128400469 +0000 UTC m=+149.769908615 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.210764 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.210824 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.210864 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 08:42:24.210960 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 08:42:24.211095 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 08:42:24.211317 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.212800 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 08:12:39.569555455 +0000 UTC Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.216859 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.216890 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.216901 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.216938 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.216949 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:24Z","lastTransitionTime":"2026-01-29T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.319546 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.319601 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.319613 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.319631 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.319643 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:24Z","lastTransitionTime":"2026-01-29T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.422120 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.422228 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.422239 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.422257 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.422271 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:24Z","lastTransitionTime":"2026-01-29T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.525359 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.525412 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.525425 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.525444 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.525461 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:24Z","lastTransitionTime":"2026-01-29T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.628490 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.628561 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.628574 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.628606 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.628642 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:24Z","lastTransitionTime":"2026-01-29T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.693015 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovnkube-controller/3.log" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.696377 4895 scope.go:117] "RemoveContainer" containerID="38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639" Jan 29 08:42:24 crc kubenswrapper[4895]: E0129 08:42:24.696569 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.712743 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a117584
98a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.725733 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.730522 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.730560 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.730572 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.730590 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.730602 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:24Z","lastTransitionTime":"2026-01-29T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.739235 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ce52dc-df05-45bb-9f9c-e0512bbf8b4b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c069173f445cfb105a3b28f49ce9bf7793ee448913a03b8b6d1ddbef91daee04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://dd9854afcfb65e0bf01d43a3c9135ec03936b53449edd9a024f857fea2ab013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd7a8365fe1196984119afbb2bd6f61289bb5396b77f733275c1e275519b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.756498 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.771065 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://115f19a42723357b8ac63e4f01e8d591ea064cf2757203925018f142ddbf1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:42:14Z\\\",\\\"message\\\":\\\"2026-01-29T08:41:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_0fc7b64e-5a5d-4e42-96c6-a5b15855b9aa\\\\n2026-01-29T08:41:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0fc7b64e-5a5d-4e42-96c6-a5b15855b9aa to /host/opt/cni/bin/\\\\n2026-01-29T08:41:29Z [verbose] multus-daemon started\\\\n2026-01-29T08:41:29Z [verbose] Readiness Indicator file check\\\\n2026-01-29T08:42:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.781552 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.794947 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.810981 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.831710 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:42:23Z\\\",\\\"message\\\":\\\"4 6949 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0129 08:42:23.339797 6949 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0129 08:42:23.339802 6949 obj_retry.go:365] Adding new object: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0129 08:42:23.339827 6949 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF0129 08:42:23.339827 6949 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:42:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123c
f96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.833669 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.833707 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.833718 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.833734 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.833746 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:24Z","lastTransitionTime":"2026-01-29T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.851113 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee62f73cca5e2bfdd470a4588d7e049a51be64fae5112e303145bc00c01ad5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.865375 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.880210 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.896589 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.911349 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.921973 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.934454 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:24 crc 
kubenswrapper[4895]: I0129 08:42:24.936393 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.936436 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.936450 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.936469 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.936484 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:24Z","lastTransitionTime":"2026-01-29T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:24 crc kubenswrapper[4895]: I0129 08:42:24.950004 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:24Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.039106 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.039151 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.039161 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.039180 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.039199 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:25Z","lastTransitionTime":"2026-01-29T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.142598 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.142666 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.142681 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.142705 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.142722 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:25Z","lastTransitionTime":"2026-01-29T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.211256 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:25 crc kubenswrapper[4895]: E0129 08:42:25.211729 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.213328 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:13:34.616379733 +0000 UTC Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.246524 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.246605 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.246634 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.246672 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.246699 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:25Z","lastTransitionTime":"2026-01-29T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.349223 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.349282 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.349305 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.349330 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.349353 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:25Z","lastTransitionTime":"2026-01-29T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.453498 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.453564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.453578 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.453601 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.453621 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:25Z","lastTransitionTime":"2026-01-29T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.556495 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.556532 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.556541 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.556560 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.556571 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:25Z","lastTransitionTime":"2026-01-29T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.658758 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.658809 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.658819 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.658837 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.658847 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:25Z","lastTransitionTime":"2026-01-29T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.762440 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.762484 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.762494 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.762513 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.762525 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:25Z","lastTransitionTime":"2026-01-29T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.865636 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.865678 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.865688 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.865703 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.865715 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:25Z","lastTransitionTime":"2026-01-29T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.968818 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.968863 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.968875 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.968893 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:25 crc kubenswrapper[4895]: I0129 08:42:25.968907 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:25Z","lastTransitionTime":"2026-01-29T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.071770 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.071811 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.071819 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.071833 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.071843 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:26Z","lastTransitionTime":"2026-01-29T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.174417 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.174453 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.174462 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.174477 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.174487 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:26Z","lastTransitionTime":"2026-01-29T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.210679 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.210730 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.210689 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:26 crc kubenswrapper[4895]: E0129 08:42:26.210864 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:26 crc kubenswrapper[4895]: E0129 08:42:26.210992 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:26 crc kubenswrapper[4895]: E0129 08:42:26.211096 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.213683 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:21:59.289045531 +0000 UTC Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.277739 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.277785 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.277796 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.277813 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.277822 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:26Z","lastTransitionTime":"2026-01-29T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.381013 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.381060 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.381069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.381087 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.381097 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:26Z","lastTransitionTime":"2026-01-29T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.483984 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.484039 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.484051 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.484075 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.484091 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:26Z","lastTransitionTime":"2026-01-29T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.586361 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.586404 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.586413 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.586429 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.586439 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:26Z","lastTransitionTime":"2026-01-29T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.689231 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.689297 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.689308 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.689331 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.689345 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:26Z","lastTransitionTime":"2026-01-29T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.791681 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.791736 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.791749 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.791768 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.791779 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:26Z","lastTransitionTime":"2026-01-29T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.895023 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.895162 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.895190 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.895222 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.895249 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:26Z","lastTransitionTime":"2026-01-29T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.998744 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.998799 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.998811 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.998833 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:26 crc kubenswrapper[4895]: I0129 08:42:26.998846 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:26Z","lastTransitionTime":"2026-01-29T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.101281 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.101351 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.101362 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.101386 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.101397 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:27Z","lastTransitionTime":"2026-01-29T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.204615 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.204683 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.204698 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.204721 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.204734 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:27Z","lastTransitionTime":"2026-01-29T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.211080 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:27 crc kubenswrapper[4895]: E0129 08:42:27.211622 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.213948 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:09:16.01908938 +0000 UTC Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.229748 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.308092 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.308161 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.308175 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.308198 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.308211 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:27Z","lastTransitionTime":"2026-01-29T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.412304 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.412388 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.412413 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.412446 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.412473 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:27Z","lastTransitionTime":"2026-01-29T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.515407 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.515444 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.515453 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.515469 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.515478 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:27Z","lastTransitionTime":"2026-01-29T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.560419 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.560451 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.560460 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.560478 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.560488 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:27Z","lastTransitionTime":"2026-01-29T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:27 crc kubenswrapper[4895]: E0129 08:42:27.574595 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.579240 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.579293 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.579306 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.579327 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.579339 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:27Z","lastTransitionTime":"2026-01-29T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:27 crc kubenswrapper[4895]: E0129 08:42:27.595022 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.599184 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.599227 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.599240 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.599259 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.599270 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:27Z","lastTransitionTime":"2026-01-29T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:27 crc kubenswrapper[4895]: E0129 08:42:27.611790 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.616702 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.616779 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.616794 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.616817 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.616831 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:27Z","lastTransitionTime":"2026-01-29T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:27 crc kubenswrapper[4895]: E0129 08:42:27.630094 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.634524 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.634564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.634575 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.634594 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.634607 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:27Z","lastTransitionTime":"2026-01-29T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:27 crc kubenswrapper[4895]: E0129 08:42:27.647743 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5dc976ab-cc38-4fc1-8149-00132186b0b4\\\",\\\"systemUUID\\\":\\\"1999941b-7422-4452-a2a1-4823b90b5d59\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:27Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:27 crc kubenswrapper[4895]: E0129 08:42:27.647871 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.649362 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.649408 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.649422 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.649442 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.649457 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:27Z","lastTransitionTime":"2026-01-29T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.752152 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.752227 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.752241 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.752259 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.752273 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:27Z","lastTransitionTime":"2026-01-29T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.895106 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.895157 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.895167 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.895185 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.895195 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:27Z","lastTransitionTime":"2026-01-29T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.997995 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.998050 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.998061 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.998081 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:27 crc kubenswrapper[4895]: I0129 08:42:27.998093 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:27Z","lastTransitionTime":"2026-01-29T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.101019 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.101072 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.101084 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.101102 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.101117 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:28Z","lastTransitionTime":"2026-01-29T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.203883 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.203944 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.203955 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.203972 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.203991 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:28Z","lastTransitionTime":"2026-01-29T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.210629 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.210646 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.210642 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:28 crc kubenswrapper[4895]: E0129 08:42:28.210910 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:28 crc kubenswrapper[4895]: E0129 08:42:28.210786 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:28 crc kubenswrapper[4895]: E0129 08:42:28.211015 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.214733 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 06:07:56.891252546 +0000 UTC Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.307619 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.307677 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.307687 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.307705 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.307722 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:28Z","lastTransitionTime":"2026-01-29T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.411244 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.411288 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.411301 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.411318 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.411331 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:28Z","lastTransitionTime":"2026-01-29T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.513826 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.513877 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.513888 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.513906 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.513940 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:28Z","lastTransitionTime":"2026-01-29T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.616970 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.617027 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.617045 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.617070 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.617085 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:28Z","lastTransitionTime":"2026-01-29T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.719745 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.719817 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.719834 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.719859 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.719878 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:28Z","lastTransitionTime":"2026-01-29T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.824003 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.824075 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.824092 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.824116 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.824130 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:28Z","lastTransitionTime":"2026-01-29T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.926908 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.926999 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.927013 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.927034 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:28 crc kubenswrapper[4895]: I0129 08:42:28.927047 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:28Z","lastTransitionTime":"2026-01-29T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.030391 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.030431 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.030439 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.030456 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.030478 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:29Z","lastTransitionTime":"2026-01-29T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.133427 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.133473 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.133481 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.133495 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.133506 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:29Z","lastTransitionTime":"2026-01-29T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.210610 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:29 crc kubenswrapper[4895]: E0129 08:42:29.210768 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.216038 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:30:19.620832934 +0000 UTC Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.237615 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.237704 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.237718 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.237739 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.237754 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:29Z","lastTransitionTime":"2026-01-29T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.238132 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585625afd42032cf7ca6b3c6699ce086338d5fb247b01c0b4469051bf506271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77878804f11aad1da1bfe5846d08d7646fd3c19c18939c3dd52a8abb1994ab4a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.256304 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.274375 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.291029 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.307488 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wfxqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da81d90f-1b31-410e-8de7-2f5d25b99a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18b2031893a5c8f820b5c207da3dacd23848ee6c3aae03f3004483aea2b1b4f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wfxqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.323239 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g4585" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8cxd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g4585\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc 
kubenswrapper[4895]: I0129 08:42:29.337481 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64077907-2e9a-4524-8f9c-e8b788244392\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04125ac8de345b07dd928aff1b21f178375092d856ceddd053c5df653eec03b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://c2cc33256e233e38fc4f3c2e8ebb9d6efcbc6eb510d805f7aee528bbe9e93db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2cc33256e233e38fc4f3c2e8ebb9d6efcbc6eb510d805f7aee528bbe9e93db9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.340475 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.340518 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.340527 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 
08:42:29.340546 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.340557 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:29Z","lastTransitionTime":"2026-01-29T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.350956 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"474926ed-2673-4f4d-b872-3072054ba68e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 08:41:13.405821 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 08:41:13.409053 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3530001905/tls.crt::/tmp/serving-cert-3530001905/tls.key\\\\\\\"\\\\nI0129 08:41:19.148492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 08:41:19.182523 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 08:41:19.186968 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 08:41:19.187082 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 08:41:19.187119 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 08:41:19.206983 1 secure_serving.go:57] 
Forcing use of http/1.1 only\\\\nW0129 08:41:19.207030 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207037 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 08:41:19.207041 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 08:41:19.207044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 08:41:19.207047 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 08:41:19.207050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 08:41:19.207435 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 08:41:19.210852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.363857 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94bcb4f-cad0-4cd2-b0e7-b0ba05e4ffe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde3100d9891e83e2d7524630dd3aead4c7d87a803e0b88fe736e74075e96596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90982edd28be9c76568950693fc1eeeb786b086b368f9cd98828b89da456f402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e5ce97e81ddd733a1adcd538140c1adb56aafd4850739053d425d7d856ffc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.377465 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ce52dc-df05-45bb-9f9c-e0512bbf8b4b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c069173f445cfb105a3b28f49ce9bf7793ee448913a03b8b6d1ddbef91daee04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9854afcfb65e0bf01d43a3c9135ec03936b53449edd9a024f857fea2ab013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd7a8365fe1196984119afbb2bd6f61289bb5396b77f733275c1e275519b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://38caba319297d8ccbc56aa22218cb464f6bb6d7b4dbf3064e1a21c2c3fc6c777\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:40:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.392816 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b387c6240372413cf63ed4a8ef2fbcac73a8c8d07edc8d72fe2480ca666444f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.405526 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b4dgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ba7dcf-e7a0-4408-983b-09a07851d01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://115f19a42723357b8ac63e4f01e8d591ea064cf2757203925018f142ddbf1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:42:14Z\\\",\\\"message\\\":\\\"2026-01-29T08:41:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_0fc7b64e-5a5d-4e42-96c6-a5b15855b9aa\\\\n2026-01-29T08:41:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0fc7b64e-5a5d-4e42-96c6-a5b15855b9aa to /host/opt/cni/bin/\\\\n2026-01-29T08:41:29Z [verbose] multus-daemon started\\\\n2026-01-29T08:41:29Z [verbose] Readiness Indicator file check\\\\n2026-01-29T08:42:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vn4w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b4dgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.415249 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9lpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc592fc-c35e-4480-9cb1-2f7d122f05bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://863cc178a7e9d523f46d171a4130f3aa320ec79589a8adbbe23704447e7ee6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwz92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9lpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.427895 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://900e140f43696755d9160720c181e76eca371b4e88c4d23bf7626f1511d3762b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T0
8:41:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.440142 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4a4bd95-f02a-4617-9aa4-febfa6bee92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15440bcdf10c9c4355b29258f9de1800885c28ab9d91e9adb129d231098a27cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc3
67b354968c823444020a6bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9dx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z82hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.442936 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.442980 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.442990 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:29 crc 
kubenswrapper[4895]: I0129 08:42:29.443005 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.443019 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:29Z","lastTransitionTime":"2026-01-29T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.458617 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be953ef9-0feb-4327-ba58-0e29287bab39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bf21d7e143d7e7a59a98899dc801cadda043a0b49450f9c220028427fd0fd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbf4e1a314e0cb85cb3a55caeb2448505fa489b2ce78d72a28b531ad4fa03ed8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ba4faff66d0ad803ad2d5303f65cffcee2ecd318dde0f2152a94fa8967735f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://371021fc4453fec158029bb23b54f9ff3f22732df6fcccd81273406fb622e200\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b177faa8d9bcf1cd111d54862ae49f1d6f5a7832b2e6fd53c180077c0af7cb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d97d2b4d0e851377f0599ef45610a9b626c6d883db77d24a7966c15d70d8d545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22
0413a59501c5a96cbdaa14c9a243008c37e85a612e36ba6d0ccc8e60b55749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rkk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.479974 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7621f3ab-b09c-4a23-8031-645d96fe5c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T08:42:23Z\\\",\\\"message\\\":\\\"4 6949 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0129 08:42:23.339797 6949 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0129 08:42:23.339802 6949 obj_retry.go:365] Adding new object: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0129 08:42:23.339827 6949 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF0129 08:42:23.339827 6949 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T08:42:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ce2bd031d3e77123c
f96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T08:41:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T08:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnjb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4zc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.494139 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd357565-d91f-44af-bc41-befbeb672385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T08:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac4294f456e7ca9fffa804a33f9f84550443779e53329179472b442a14bac9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee62f73cca5e2bfdd470a4588d7e049a51be6
4fae5112e303145bc00c01ad5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gltqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T08:41:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w8kmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T08:42:29Z is after 2025-08-24T17:21:41Z" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.545812 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.545852 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.545859 4895 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.545873 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.545886 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:29Z","lastTransitionTime":"2026-01-29T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.648650 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.648712 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.648724 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.648743 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.648757 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:29Z","lastTransitionTime":"2026-01-29T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.751334 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.751394 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.751412 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.751433 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.751451 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:29Z","lastTransitionTime":"2026-01-29T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.854760 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.854857 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.854878 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.854900 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.854932 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:29Z","lastTransitionTime":"2026-01-29T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.957608 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.957665 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.957678 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.957696 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:29 crc kubenswrapper[4895]: I0129 08:42:29.957709 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:29Z","lastTransitionTime":"2026-01-29T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.060289 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.060331 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.060343 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.060364 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.060377 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:30Z","lastTransitionTime":"2026-01-29T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.164019 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.164103 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.164124 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.164152 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.164172 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:30Z","lastTransitionTime":"2026-01-29T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.210985 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.211578 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.211841 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:30 crc kubenswrapper[4895]: E0129 08:42:30.211904 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:30 crc kubenswrapper[4895]: E0129 08:42:30.212176 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:30 crc kubenswrapper[4895]: E0129 08:42:30.215270 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.216309 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 02:23:25.709947301 +0000 UTC Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.267396 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.267466 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.267488 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.267520 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.267541 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:30Z","lastTransitionTime":"2026-01-29T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.370999 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.371071 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.371092 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.371120 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.371141 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:30Z","lastTransitionTime":"2026-01-29T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.474444 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.474512 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.474525 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.474548 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.474560 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:30Z","lastTransitionTime":"2026-01-29T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.577649 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.577700 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.577712 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.577730 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.577744 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:30Z","lastTransitionTime":"2026-01-29T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.681158 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.681193 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.681222 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.681238 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.681248 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:30Z","lastTransitionTime":"2026-01-29T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.784559 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.784613 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.784624 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.784641 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.784650 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:30Z","lastTransitionTime":"2026-01-29T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.888068 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.888123 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.888136 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.888156 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.888168 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:30Z","lastTransitionTime":"2026-01-29T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.991145 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.991201 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.991214 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.991239 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:30 crc kubenswrapper[4895]: I0129 08:42:30.991251 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:30Z","lastTransitionTime":"2026-01-29T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.099899 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.099959 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.099970 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.099991 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.100008 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:31Z","lastTransitionTime":"2026-01-29T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.202993 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.203298 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.203383 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.203473 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.203551 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:31Z","lastTransitionTime":"2026-01-29T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.210553 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:31 crc kubenswrapper[4895]: E0129 08:42:31.210734 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.216707 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:51:23.861967964 +0000 UTC Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.306530 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.306584 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.306597 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.306619 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.306632 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:31Z","lastTransitionTime":"2026-01-29T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.408941 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.409003 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.409015 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.409035 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.409049 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:31Z","lastTransitionTime":"2026-01-29T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.511653 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.511704 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.511714 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.511733 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.511747 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:31Z","lastTransitionTime":"2026-01-29T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.614584 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.614643 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.614666 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.614694 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.614711 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:31Z","lastTransitionTime":"2026-01-29T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.717953 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.718005 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.718016 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.718038 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.718052 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:31Z","lastTransitionTime":"2026-01-29T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.820633 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.820714 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.820726 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.820756 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.820769 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:31Z","lastTransitionTime":"2026-01-29T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.925476 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.925545 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.925559 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.925580 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:31 crc kubenswrapper[4895]: I0129 08:42:31.925597 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:31Z","lastTransitionTime":"2026-01-29T08:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.028185 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.028242 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.028253 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.028269 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.028284 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:32Z","lastTransitionTime":"2026-01-29T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.131539 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.131586 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.131603 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.131621 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.131631 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:32Z","lastTransitionTime":"2026-01-29T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.210601 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:32 crc kubenswrapper[4895]: E0129 08:42:32.210750 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.211006 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:32 crc kubenswrapper[4895]: E0129 08:42:32.211059 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.211243 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:32 crc kubenswrapper[4895]: E0129 08:42:32.211443 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.217684 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:01:23.562652507 +0000 UTC Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.235771 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.236320 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.236334 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.236357 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.236372 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:32Z","lastTransitionTime":"2026-01-29T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.339763 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.339823 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.339837 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.339855 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.339866 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:32Z","lastTransitionTime":"2026-01-29T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.443104 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.443154 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.443167 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.443187 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.443201 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:32Z","lastTransitionTime":"2026-01-29T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.545640 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.545670 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.545678 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.545694 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.545708 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:32Z","lastTransitionTime":"2026-01-29T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.649810 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.649884 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.649898 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.649946 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.649963 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:32Z","lastTransitionTime":"2026-01-29T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.758866 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.758998 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.759015 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.759064 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.759078 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:32Z","lastTransitionTime":"2026-01-29T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.861138 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.861175 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.861186 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.861201 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.861211 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:32Z","lastTransitionTime":"2026-01-29T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.963668 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.963716 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.963727 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.963745 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:32 crc kubenswrapper[4895]: I0129 08:42:32.963757 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:32Z","lastTransitionTime":"2026-01-29T08:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.066476 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.066536 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.066550 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.066570 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.066586 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:33Z","lastTransitionTime":"2026-01-29T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.169793 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.169839 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.169851 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.169870 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.169883 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:33Z","lastTransitionTime":"2026-01-29T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.210666 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:33 crc kubenswrapper[4895]: E0129 08:42:33.210852 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.218038 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 11:29:10.436831758 +0000 UTC Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.273100 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.273165 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.273176 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.273196 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.273208 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:33Z","lastTransitionTime":"2026-01-29T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.376178 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.376245 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.376262 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.376295 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.376333 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:33Z","lastTransitionTime":"2026-01-29T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.479468 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.479544 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.479572 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.479609 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.479634 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:33Z","lastTransitionTime":"2026-01-29T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.583337 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.583402 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.583423 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.583449 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.583467 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:33Z","lastTransitionTime":"2026-01-29T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.687633 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.687695 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.687711 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.687735 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.687750 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:33Z","lastTransitionTime":"2026-01-29T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.790546 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.790598 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.790628 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.790648 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.790658 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:33Z","lastTransitionTime":"2026-01-29T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.893390 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.893445 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.893456 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.893474 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.893485 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:33Z","lastTransitionTime":"2026-01-29T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.995895 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.996000 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.996019 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.996046 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:33 crc kubenswrapper[4895]: I0129 08:42:33.996066 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:33Z","lastTransitionTime":"2026-01-29T08:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.099655 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.099742 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.099755 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.099781 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.099796 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:34Z","lastTransitionTime":"2026-01-29T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.203860 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.203950 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.203966 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.203992 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.204008 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:34Z","lastTransitionTime":"2026-01-29T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.210214 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.210310 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.210214 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:34 crc kubenswrapper[4895]: E0129 08:42:34.210381 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:34 crc kubenswrapper[4895]: E0129 08:42:34.210437 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:34 crc kubenswrapper[4895]: E0129 08:42:34.210514 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.218464 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 18:59:28.331569415 +0000 UTC Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.307473 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.307534 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.307549 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.307573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.307590 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:34Z","lastTransitionTime":"2026-01-29T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.410202 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.410263 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.410275 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.410296 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.410310 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:34Z","lastTransitionTime":"2026-01-29T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.513735 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.513789 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.513800 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.513818 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.513831 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:34Z","lastTransitionTime":"2026-01-29T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.616791 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.616844 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.616853 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.616873 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.616886 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:34Z","lastTransitionTime":"2026-01-29T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.719594 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.719673 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.719686 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.719708 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.719724 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:34Z","lastTransitionTime":"2026-01-29T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.823025 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.823099 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.823114 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.823131 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.823160 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:34Z","lastTransitionTime":"2026-01-29T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.926461 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.926527 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.926541 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.926564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:34 crc kubenswrapper[4895]: I0129 08:42:34.926580 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:34Z","lastTransitionTime":"2026-01-29T08:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.028898 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.028996 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.029010 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.029038 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.029091 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:35Z","lastTransitionTime":"2026-01-29T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.132099 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.132187 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.132222 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.132256 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.132274 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:35Z","lastTransitionTime":"2026-01-29T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.210391 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:35 crc kubenswrapper[4895]: E0129 08:42:35.210782 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.218910 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 16:55:55.576853109 +0000 UTC Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.235328 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.235374 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.235391 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.235412 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.235429 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:35Z","lastTransitionTime":"2026-01-29T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.338110 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.338150 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.338162 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.338181 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.338193 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:35Z","lastTransitionTime":"2026-01-29T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.440634 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.440682 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.440693 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.440709 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.440721 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:35Z","lastTransitionTime":"2026-01-29T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.543402 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.543479 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.543498 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.543570 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.543590 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:35Z","lastTransitionTime":"2026-01-29T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.646830 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.646887 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.646896 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.646936 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.646948 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:35Z","lastTransitionTime":"2026-01-29T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.751103 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.751156 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.751172 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.751224 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.751243 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:35Z","lastTransitionTime":"2026-01-29T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.854260 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.854294 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.854302 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.854317 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.854326 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:35Z","lastTransitionTime":"2026-01-29T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.957105 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.957158 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.957172 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.957194 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:35 crc kubenswrapper[4895]: I0129 08:42:35.957206 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:35Z","lastTransitionTime":"2026-01-29T08:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.060719 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.060784 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.060795 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.060813 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.060826 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:36Z","lastTransitionTime":"2026-01-29T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.163210 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.163260 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.163273 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.163291 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.163305 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:36Z","lastTransitionTime":"2026-01-29T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.211126 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.211184 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.211151 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:36 crc kubenswrapper[4895]: E0129 08:42:36.211299 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:36 crc kubenswrapper[4895]: E0129 08:42:36.211370 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:36 crc kubenswrapper[4895]: E0129 08:42:36.211473 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.219489 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:01:22.039616757 +0000 UTC Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.266520 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.266576 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.266589 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.266613 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.266630 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:36Z","lastTransitionTime":"2026-01-29T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.369585 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.369627 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.369637 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.369653 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.369665 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:36Z","lastTransitionTime":"2026-01-29T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.472084 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.472155 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.472172 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.472197 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.472212 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:36Z","lastTransitionTime":"2026-01-29T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.574583 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.574624 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.574633 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.574647 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.574659 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:36Z","lastTransitionTime":"2026-01-29T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.677536 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.677605 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.677630 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.677663 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.677684 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:36Z","lastTransitionTime":"2026-01-29T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.780704 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.780778 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.780799 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.780826 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.780843 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:36Z","lastTransitionTime":"2026-01-29T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.883394 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.883478 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.883491 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.883508 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.883536 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:36Z","lastTransitionTime":"2026-01-29T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.987104 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.987181 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.987202 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.987229 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:36 crc kubenswrapper[4895]: I0129 08:42:36.987248 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:36Z","lastTransitionTime":"2026-01-29T08:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.090714 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.090784 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.090800 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.090822 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.090836 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:37Z","lastTransitionTime":"2026-01-29T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.194009 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.194066 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.194081 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.194099 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.194112 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:37Z","lastTransitionTime":"2026-01-29T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.210527 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:37 crc kubenswrapper[4895]: E0129 08:42:37.210739 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.211550 4895 scope.go:117] "RemoveContainer" containerID="38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639" Jan 29 08:42:37 crc kubenswrapper[4895]: E0129 08:42:37.211775 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.220407 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:35:21.457006804 +0000 UTC Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.297017 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.297075 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.297090 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.297114 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.297131 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:37Z","lastTransitionTime":"2026-01-29T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.400543 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.400597 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.400608 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.400631 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.400642 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:37Z","lastTransitionTime":"2026-01-29T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.503932 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.503981 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.504009 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.504032 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.504044 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:37Z","lastTransitionTime":"2026-01-29T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.606767 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.606822 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.606834 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.606851 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.606861 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:37Z","lastTransitionTime":"2026-01-29T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.710390 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.710452 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.710463 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.710482 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.710494 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:37Z","lastTransitionTime":"2026-01-29T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.815229 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.815278 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.815289 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.815306 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.815323 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:37Z","lastTransitionTime":"2026-01-29T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.917760 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.917841 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.917852 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.917868 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.917877 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:37Z","lastTransitionTime":"2026-01-29T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.934906 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.935000 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.935014 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.935032 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 08:42:37 crc kubenswrapper[4895]: I0129 08:42:37.935050 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T08:42:37Z","lastTransitionTime":"2026-01-29T08:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.002870 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn"] Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.003503 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.005503 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.006297 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.006458 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.007699 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.045898 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w8kmk" podStartSLOduration=73.045846271 podStartE2EDuration="1m13.045846271s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:42:38.045749259 +0000 UTC m=+99.687257405" watchObservedRunningTime="2026-01-29 08:42:38.045846271 +0000 UTC m=+99.687354417" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.074561 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podStartSLOduration=74.07445541 podStartE2EDuration="1m14.07445541s" podCreationTimestamp="2026-01-29 08:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:42:38.073628511 +0000 UTC m=+99.715136667" watchObservedRunningTime="2026-01-29 
08:42:38.07445541 +0000 UTC m=+99.715963556" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.092680 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-7j8rs" podStartSLOduration=74.092653826 podStartE2EDuration="1m14.092653826s" podCreationTimestamp="2026-01-29 08:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:42:38.09200048 +0000 UTC m=+99.733508636" watchObservedRunningTime="2026-01-29 08:42:38.092653826 +0000 UTC m=+99.734161972" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.102281 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddee0f70-21ca-4302-b33c-4f7c38855cc1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wphmn\" (UID: \"ddee0f70-21ca-4302-b33c-4f7c38855cc1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.102371 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ddee0f70-21ca-4302-b33c-4f7c38855cc1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wphmn\" (UID: \"ddee0f70-21ca-4302-b33c-4f7c38855cc1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.102399 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddee0f70-21ca-4302-b33c-4f7c38855cc1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wphmn\" (UID: \"ddee0f70-21ca-4302-b33c-4f7c38855cc1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 
08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.102425 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ddee0f70-21ca-4302-b33c-4f7c38855cc1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wphmn\" (UID: \"ddee0f70-21ca-4302-b33c-4f7c38855cc1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.102458 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ddee0f70-21ca-4302-b33c-4f7c38855cc1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wphmn\" (UID: \"ddee0f70-21ca-4302-b33c-4f7c38855cc1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.203124 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ddee0f70-21ca-4302-b33c-4f7c38855cc1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wphmn\" (UID: \"ddee0f70-21ca-4302-b33c-4f7c38855cc1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.203479 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddee0f70-21ca-4302-b33c-4f7c38855cc1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wphmn\" (UID: \"ddee0f70-21ca-4302-b33c-4f7c38855cc1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.203646 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/ddee0f70-21ca-4302-b33c-4f7c38855cc1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wphmn\" (UID: \"ddee0f70-21ca-4302-b33c-4f7c38855cc1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.203746 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ddee0f70-21ca-4302-b33c-4f7c38855cc1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wphmn\" (UID: \"ddee0f70-21ca-4302-b33c-4f7c38855cc1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.203286 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ddee0f70-21ca-4302-b33c-4f7c38855cc1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wphmn\" (UID: \"ddee0f70-21ca-4302-b33c-4f7c38855cc1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.204021 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ddee0f70-21ca-4302-b33c-4f7c38855cc1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wphmn\" (UID: \"ddee0f70-21ca-4302-b33c-4f7c38855cc1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.204175 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddee0f70-21ca-4302-b33c-4f7c38855cc1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wphmn\" (UID: \"ddee0f70-21ca-4302-b33c-4f7c38855cc1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 08:42:38 crc 
kubenswrapper[4895]: I0129 08:42:38.204876 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ddee0f70-21ca-4302-b33c-4f7c38855cc1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wphmn\" (UID: \"ddee0f70-21ca-4302-b33c-4f7c38855cc1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.211211 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:38 crc kubenswrapper[4895]: E0129 08:42:38.211376 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.211245 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.211645 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:38 crc kubenswrapper[4895]: E0129 08:42:38.212032 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:38 crc kubenswrapper[4895]: E0129 08:42:38.211946 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.219552 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddee0f70-21ca-4302-b33c-4f7c38855cc1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wphmn\" (UID: \"ddee0f70-21ca-4302-b33c-4f7c38855cc1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.222004 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 12:33:39.220200452 +0000 UTC Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.222074 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.235887 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddee0f70-21ca-4302-b33c-4f7c38855cc1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wphmn\" (UID: \"ddee0f70-21ca-4302-b33c-4f7c38855cc1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.246887 4895 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from 
k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.251304 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-wfxqf" podStartSLOduration=74.251272616 podStartE2EDuration="1m14.251272616s" podCreationTimestamp="2026-01-29 08:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:42:38.25106901 +0000 UTC m=+99.892577156" watchObservedRunningTime="2026-01-29 08:42:38.251272616 +0000 UTC m=+99.892780782" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.297798 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.297775923 podStartE2EDuration="48.297775923s" podCreationTimestamp="2026-01-29 08:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:42:38.29642762 +0000 UTC m=+99.937935766" watchObservedRunningTime="2026-01-29 08:42:38.297775923 +0000 UTC m=+99.939284069" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.298600 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=79.298590303 podStartE2EDuration="1m19.298590303s" podCreationTimestamp="2026-01-29 08:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:42:38.271195153 +0000 UTC m=+99.912703309" watchObservedRunningTime="2026-01-29 08:42:38.298590303 +0000 UTC m=+99.940098449" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.320321 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" Jan 29 08:42:38 crc kubenswrapper[4895]: W0129 08:42:38.340434 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddee0f70_21ca_4302_b33c_4f7c38855cc1.slice/crio-f540730d57d2839289981ea35e11c5c9ba3272f70d74bf1ed9e1ddf69c047500 WatchSource:0}: Error finding container f540730d57d2839289981ea35e11c5c9ba3272f70d74bf1ed9e1ddf69c047500: Status 404 returned error can't find the container with id f540730d57d2839289981ea35e11c5c9ba3272f70d74bf1ed9e1ddf69c047500 Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.344908 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-b4dgj" podStartSLOduration=74.344888525 podStartE2EDuration="1m14.344888525s" podCreationTimestamp="2026-01-29 08:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:42:38.344544137 +0000 UTC m=+99.986052283" watchObservedRunningTime="2026-01-29 08:42:38.344888525 +0000 UTC m=+99.986396671" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.360384 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-q9lpx" podStartSLOduration=74.360355403 podStartE2EDuration="1m14.360355403s" podCreationTimestamp="2026-01-29 08:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:42:38.360283691 +0000 UTC m=+100.001791837" watchObservedRunningTime="2026-01-29 08:42:38.360355403 +0000 UTC m=+100.001863549" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.376238 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
podStartSLOduration=11.376215672 podStartE2EDuration="11.376215672s" podCreationTimestamp="2026-01-29 08:42:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:42:38.374011637 +0000 UTC m=+100.015519783" watchObservedRunningTime="2026-01-29 08:42:38.376215672 +0000 UTC m=+100.017723818" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.397332 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.397303898 podStartE2EDuration="1m19.397303898s" podCreationTimestamp="2026-01-29 08:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:42:38.396545788 +0000 UTC m=+100.038053954" watchObservedRunningTime="2026-01-29 08:42:38.397303898 +0000 UTC m=+100.038812044" Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.742912 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" event={"ID":"ddee0f70-21ca-4302-b33c-4f7c38855cc1","Type":"ContainerStarted","Data":"f73f0d31b3fd9c71af96f7a80e7f0f64ff932bf7bc55c9730d3a34ecae2c9728"} Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.743024 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" event={"ID":"ddee0f70-21ca-4302-b33c-4f7c38855cc1","Type":"ContainerStarted","Data":"f540730d57d2839289981ea35e11c5c9ba3272f70d74bf1ed9e1ddf69c047500"} Jan 29 08:42:38 crc kubenswrapper[4895]: I0129 08:42:38.760293 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wphmn" podStartSLOduration=74.760252285 podStartE2EDuration="1m14.760252285s" podCreationTimestamp="2026-01-29 08:41:24 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:42:38.759125137 +0000 UTC m=+100.400633303" watchObservedRunningTime="2026-01-29 08:42:38.760252285 +0000 UTC m=+100.401760471" Jan 29 08:42:39 crc kubenswrapper[4895]: I0129 08:42:39.211271 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:39 crc kubenswrapper[4895]: E0129 08:42:39.212171 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:40 crc kubenswrapper[4895]: I0129 08:42:40.210515 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:40 crc kubenswrapper[4895]: I0129 08:42:40.210626 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:40 crc kubenswrapper[4895]: I0129 08:42:40.210727 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:40 crc kubenswrapper[4895]: E0129 08:42:40.210967 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:40 crc kubenswrapper[4895]: E0129 08:42:40.211187 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:40 crc kubenswrapper[4895]: E0129 08:42:40.211334 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:41 crc kubenswrapper[4895]: I0129 08:42:41.211148 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:41 crc kubenswrapper[4895]: E0129 08:42:41.211387 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:42 crc kubenswrapper[4895]: I0129 08:42:42.434641 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:42 crc kubenswrapper[4895]: E0129 08:42:42.434797 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:42 crc kubenswrapper[4895]: I0129 08:42:42.435818 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:42 crc kubenswrapper[4895]: I0129 08:42:42.435889 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:42 crc kubenswrapper[4895]: E0129 08:42:42.436061 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:42 crc kubenswrapper[4895]: I0129 08:42:42.436098 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:42 crc kubenswrapper[4895]: E0129 08:42:42.436625 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:42 crc kubenswrapper[4895]: E0129 08:42:42.437580 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:42 crc kubenswrapper[4895]: I0129 08:42:42.457634 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 08:42:43 crc kubenswrapper[4895]: I0129 08:42:43.750602 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs\") pod \"network-metrics-daemon-g4585\" (UID: \"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\") " pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:43 crc kubenswrapper[4895]: E0129 08:42:43.750769 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:42:43 crc kubenswrapper[4895]: E0129 08:42:43.750845 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs podName:d167bf78-4ea9-42d8-8ab6-6aaf234e102e nodeName:}" failed. No retries permitted until 2026-01-29 08:43:47.75082846 +0000 UTC m=+169.392336606 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs") pod "network-metrics-daemon-g4585" (UID: "d167bf78-4ea9-42d8-8ab6-6aaf234e102e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 08:42:44 crc kubenswrapper[4895]: I0129 08:42:44.211247 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:44 crc kubenswrapper[4895]: I0129 08:42:44.211298 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:44 crc kubenswrapper[4895]: E0129 08:42:44.211539 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:44 crc kubenswrapper[4895]: I0129 08:42:44.211812 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:44 crc kubenswrapper[4895]: E0129 08:42:44.211872 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:44 crc kubenswrapper[4895]: I0129 08:42:44.211988 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:44 crc kubenswrapper[4895]: E0129 08:42:44.212040 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:44 crc kubenswrapper[4895]: E0129 08:42:44.212212 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:46 crc kubenswrapper[4895]: I0129 08:42:46.211202 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:46 crc kubenswrapper[4895]: I0129 08:42:46.211246 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:46 crc kubenswrapper[4895]: I0129 08:42:46.211275 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:46 crc kubenswrapper[4895]: I0129 08:42:46.211195 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:46 crc kubenswrapper[4895]: E0129 08:42:46.211370 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:46 crc kubenswrapper[4895]: E0129 08:42:46.211556 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:46 crc kubenswrapper[4895]: E0129 08:42:46.211632 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:46 crc kubenswrapper[4895]: E0129 08:42:46.211708 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:48 crc kubenswrapper[4895]: I0129 08:42:48.210583 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:48 crc kubenswrapper[4895]: E0129 08:42:48.210753 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:48 crc kubenswrapper[4895]: I0129 08:42:48.211137 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:48 crc kubenswrapper[4895]: E0129 08:42:48.211207 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:48 crc kubenswrapper[4895]: I0129 08:42:48.211220 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:48 crc kubenswrapper[4895]: I0129 08:42:48.211331 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:48 crc kubenswrapper[4895]: E0129 08:42:48.211425 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:48 crc kubenswrapper[4895]: E0129 08:42:48.211523 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:49 crc kubenswrapper[4895]: I0129 08:42:49.213310 4895 scope.go:117] "RemoveContainer" containerID="38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639" Jan 29 08:42:49 crc kubenswrapper[4895]: E0129 08:42:49.213475 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" Jan 29 08:42:49 crc kubenswrapper[4895]: I0129 08:42:49.240352 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=7.240330461 podStartE2EDuration="7.240330461s" podCreationTimestamp="2026-01-29 08:42:42 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:42:49.238710681 +0000 UTC m=+110.880218858" watchObservedRunningTime="2026-01-29 08:42:49.240330461 +0000 UTC m=+110.881838607" Jan 29 08:42:50 crc kubenswrapper[4895]: I0129 08:42:50.210304 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:50 crc kubenswrapper[4895]: I0129 08:42:50.210389 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:50 crc kubenswrapper[4895]: I0129 08:42:50.210330 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:50 crc kubenswrapper[4895]: I0129 08:42:50.210299 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:50 crc kubenswrapper[4895]: E0129 08:42:50.210483 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:50 crc kubenswrapper[4895]: E0129 08:42:50.210592 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:50 crc kubenswrapper[4895]: E0129 08:42:50.210687 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:50 crc kubenswrapper[4895]: E0129 08:42:50.210735 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:52 crc kubenswrapper[4895]: I0129 08:42:52.210472 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:52 crc kubenswrapper[4895]: I0129 08:42:52.210518 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:52 crc kubenswrapper[4895]: E0129 08:42:52.210664 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:52 crc kubenswrapper[4895]: I0129 08:42:52.210697 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:52 crc kubenswrapper[4895]: I0129 08:42:52.210680 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:52 crc kubenswrapper[4895]: E0129 08:42:52.211053 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:52 crc kubenswrapper[4895]: E0129 08:42:52.211179 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:52 crc kubenswrapper[4895]: E0129 08:42:52.211215 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:54 crc kubenswrapper[4895]: I0129 08:42:54.210703 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:54 crc kubenswrapper[4895]: I0129 08:42:54.210784 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:54 crc kubenswrapper[4895]: I0129 08:42:54.210784 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:54 crc kubenswrapper[4895]: I0129 08:42:54.210740 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:54 crc kubenswrapper[4895]: E0129 08:42:54.211050 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:54 crc kubenswrapper[4895]: E0129 08:42:54.211203 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:54 crc kubenswrapper[4895]: E0129 08:42:54.211269 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:54 crc kubenswrapper[4895]: E0129 08:42:54.211363 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:56 crc kubenswrapper[4895]: I0129 08:42:56.210709 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:56 crc kubenswrapper[4895]: I0129 08:42:56.210742 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:56 crc kubenswrapper[4895]: E0129 08:42:56.210935 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:56 crc kubenswrapper[4895]: I0129 08:42:56.210742 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:56 crc kubenswrapper[4895]: I0129 08:42:56.210743 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:56 crc kubenswrapper[4895]: E0129 08:42:56.211072 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:56 crc kubenswrapper[4895]: E0129 08:42:56.211301 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:56 crc kubenswrapper[4895]: E0129 08:42:56.211334 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:58 crc kubenswrapper[4895]: I0129 08:42:58.210859 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:42:58 crc kubenswrapper[4895]: I0129 08:42:58.210894 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:42:58 crc kubenswrapper[4895]: I0129 08:42:58.210860 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:42:58 crc kubenswrapper[4895]: E0129 08:42:58.211017 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:42:58 crc kubenswrapper[4895]: E0129 08:42:58.211169 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:42:58 crc kubenswrapper[4895]: E0129 08:42:58.211319 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:42:58 crc kubenswrapper[4895]: I0129 08:42:58.210884 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:42:58 crc kubenswrapper[4895]: E0129 08:42:58.211853 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:42:59 crc kubenswrapper[4895]: E0129 08:42:59.233659 4895 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 29 08:42:59 crc kubenswrapper[4895]: E0129 08:42:59.312757 4895 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 08:43:00 crc kubenswrapper[4895]: I0129 08:43:00.210312 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:43:00 crc kubenswrapper[4895]: I0129 08:43:00.210399 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:43:00 crc kubenswrapper[4895]: I0129 08:43:00.210335 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:43:00 crc kubenswrapper[4895]: I0129 08:43:00.210360 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:43:00 crc kubenswrapper[4895]: E0129 08:43:00.210545 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:43:00 crc kubenswrapper[4895]: E0129 08:43:00.210694 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:43:00 crc kubenswrapper[4895]: E0129 08:43:00.210743 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:43:00 crc kubenswrapper[4895]: E0129 08:43:00.210879 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:43:01 crc kubenswrapper[4895]: I0129 08:43:01.211627 4895 scope.go:117] "RemoveContainer" containerID="38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639" Jan 29 08:43:01 crc kubenswrapper[4895]: E0129 08:43:01.211852 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4zc4_openshift-ovn-kubernetes(7621f3ab-b09c-4a23-8031-645d96fe5c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" Jan 29 08:43:01 crc kubenswrapper[4895]: I0129 08:43:01.825757 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b4dgj_69ba7dcf-e7a0-4408-983b-09a07851d01c/kube-multus/1.log" Jan 29 08:43:01 crc kubenswrapper[4895]: I0129 08:43:01.826709 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b4dgj_69ba7dcf-e7a0-4408-983b-09a07851d01c/kube-multus/0.log" Jan 29 08:43:01 crc kubenswrapper[4895]: I0129 08:43:01.826778 4895 generic.go:334] "Generic (PLEG): container finished" podID="69ba7dcf-e7a0-4408-983b-09a07851d01c" containerID="115f19a42723357b8ac63e4f01e8d591ea064cf2757203925018f142ddbf1336" exitCode=1 Jan 29 08:43:01 crc kubenswrapper[4895]: I0129 08:43:01.826823 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b4dgj" event={"ID":"69ba7dcf-e7a0-4408-983b-09a07851d01c","Type":"ContainerDied","Data":"115f19a42723357b8ac63e4f01e8d591ea064cf2757203925018f142ddbf1336"} Jan 29 08:43:01 crc kubenswrapper[4895]: I0129 08:43:01.826892 4895 scope.go:117] "RemoveContainer" containerID="1f28738125418dedb487713a22aa71c3ab0a8c39a56399d850a2bc3b8d493f9e" Jan 29 08:43:01 crc kubenswrapper[4895]: I0129 08:43:01.827325 4895 scope.go:117] "RemoveContainer" containerID="115f19a42723357b8ac63e4f01e8d591ea064cf2757203925018f142ddbf1336" Jan 29 08:43:01 crc kubenswrapper[4895]: E0129 08:43:01.827628 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-b4dgj_openshift-multus(69ba7dcf-e7a0-4408-983b-09a07851d01c)\"" pod="openshift-multus/multus-b4dgj" podUID="69ba7dcf-e7a0-4408-983b-09a07851d01c" Jan 29 08:43:02 crc kubenswrapper[4895]: I0129 08:43:02.211215 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:43:02 crc kubenswrapper[4895]: I0129 08:43:02.211315 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:43:02 crc kubenswrapper[4895]: E0129 08:43:02.211388 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:43:02 crc kubenswrapper[4895]: E0129 08:43:02.211480 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:43:02 crc kubenswrapper[4895]: I0129 08:43:02.211234 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:43:02 crc kubenswrapper[4895]: E0129 08:43:02.211590 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:43:02 crc kubenswrapper[4895]: I0129 08:43:02.211211 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:43:02 crc kubenswrapper[4895]: E0129 08:43:02.211649 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:43:02 crc kubenswrapper[4895]: I0129 08:43:02.831779 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b4dgj_69ba7dcf-e7a0-4408-983b-09a07851d01c/kube-multus/1.log" Jan 29 08:43:04 crc kubenswrapper[4895]: I0129 08:43:04.211106 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:43:04 crc kubenswrapper[4895]: I0129 08:43:04.211514 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:43:04 crc kubenswrapper[4895]: E0129 08:43:04.211574 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:43:04 crc kubenswrapper[4895]: I0129 08:43:04.211107 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:43:04 crc kubenswrapper[4895]: I0129 08:43:04.211399 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:43:04 crc kubenswrapper[4895]: E0129 08:43:04.211704 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:43:04 crc kubenswrapper[4895]: E0129 08:43:04.211832 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:43:04 crc kubenswrapper[4895]: E0129 08:43:04.211964 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:43:04 crc kubenswrapper[4895]: E0129 08:43:04.314127 4895 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 08:43:06 crc kubenswrapper[4895]: I0129 08:43:06.210783 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:43:06 crc kubenswrapper[4895]: I0129 08:43:06.210844 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:43:06 crc kubenswrapper[4895]: I0129 08:43:06.210852 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:43:06 crc kubenswrapper[4895]: I0129 08:43:06.210808 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:43:06 crc kubenswrapper[4895]: E0129 08:43:06.211023 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:43:06 crc kubenswrapper[4895]: E0129 08:43:06.211285 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:43:06 crc kubenswrapper[4895]: E0129 08:43:06.211494 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:43:06 crc kubenswrapper[4895]: E0129 08:43:06.211569 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:43:08 crc kubenswrapper[4895]: I0129 08:43:08.210244 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:43:08 crc kubenswrapper[4895]: E0129 08:43:08.210406 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:43:08 crc kubenswrapper[4895]: I0129 08:43:08.210627 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:43:08 crc kubenswrapper[4895]: E0129 08:43:08.210692 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:43:08 crc kubenswrapper[4895]: I0129 08:43:08.210812 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:43:08 crc kubenswrapper[4895]: E0129 08:43:08.210887 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:43:08 crc kubenswrapper[4895]: I0129 08:43:08.211038 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:43:08 crc kubenswrapper[4895]: E0129 08:43:08.212546 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:43:09 crc kubenswrapper[4895]: E0129 08:43:09.314669 4895 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 08:43:10 crc kubenswrapper[4895]: I0129 08:43:10.210869 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:43:10 crc kubenswrapper[4895]: I0129 08:43:10.210883 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:43:10 crc kubenswrapper[4895]: I0129 08:43:10.211048 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:43:10 crc kubenswrapper[4895]: I0129 08:43:10.211104 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:43:10 crc kubenswrapper[4895]: E0129 08:43:10.211200 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:43:10 crc kubenswrapper[4895]: E0129 08:43:10.211753 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:43:10 crc kubenswrapper[4895]: E0129 08:43:10.211833 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:43:10 crc kubenswrapper[4895]: E0129 08:43:10.211889 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:43:12 crc kubenswrapper[4895]: I0129 08:43:12.211014 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:43:12 crc kubenswrapper[4895]: I0129 08:43:12.211066 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:43:12 crc kubenswrapper[4895]: E0129 08:43:12.211240 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:43:12 crc kubenswrapper[4895]: E0129 08:43:12.211343 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:43:12 crc kubenswrapper[4895]: I0129 08:43:12.211884 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:43:12 crc kubenswrapper[4895]: E0129 08:43:12.212185 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:43:12 crc kubenswrapper[4895]: I0129 08:43:12.211838 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:43:12 crc kubenswrapper[4895]: E0129 08:43:12.212467 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:43:14 crc kubenswrapper[4895]: I0129 08:43:14.210624 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:43:14 crc kubenswrapper[4895]: I0129 08:43:14.210705 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:43:14 crc kubenswrapper[4895]: I0129 08:43:14.210706 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:43:14 crc kubenswrapper[4895]: E0129 08:43:14.210816 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:43:14 crc kubenswrapper[4895]: I0129 08:43:14.210968 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:43:14 crc kubenswrapper[4895]: E0129 08:43:14.211046 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:43:14 crc kubenswrapper[4895]: E0129 08:43:14.211227 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:43:14 crc kubenswrapper[4895]: E0129 08:43:14.211357 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:43:14 crc kubenswrapper[4895]: E0129 08:43:14.316026 4895 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 08:43:15 crc kubenswrapper[4895]: I0129 08:43:15.211971 4895 scope.go:117] "RemoveContainer" containerID="115f19a42723357b8ac63e4f01e8d591ea064cf2757203925018f142ddbf1336" Jan 29 08:43:15 crc kubenswrapper[4895]: I0129 08:43:15.212073 4895 scope.go:117] "RemoveContainer" containerID="38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639" Jan 29 08:43:16 crc kubenswrapper[4895]: I0129 08:43:15.877732 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b4dgj_69ba7dcf-e7a0-4408-983b-09a07851d01c/kube-multus/1.log" Jan 29 08:43:16 crc kubenswrapper[4895]: I0129 08:43:15.878279 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b4dgj" event={"ID":"69ba7dcf-e7a0-4408-983b-09a07851d01c","Type":"ContainerStarted","Data":"f3b3757319019a832c3ca6eb585f42b40f9081e3da1c5e9129ed33a83bcbd323"} Jan 29 08:43:16 crc kubenswrapper[4895]: I0129 08:43:15.881396 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovnkube-controller/3.log" Jan 
29 08:43:16 crc kubenswrapper[4895]: I0129 08:43:15.886449 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerStarted","Data":"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80"} Jan 29 08:43:16 crc kubenswrapper[4895]: I0129 08:43:15.887361 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:43:16 crc kubenswrapper[4895]: I0129 08:43:16.238330 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:43:16 crc kubenswrapper[4895]: I0129 08:43:16.238394 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:43:16 crc kubenswrapper[4895]: I0129 08:43:16.238443 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:43:16 crc kubenswrapper[4895]: E0129 08:43:16.238495 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:43:16 crc kubenswrapper[4895]: I0129 08:43:16.238461 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:43:16 crc kubenswrapper[4895]: E0129 08:43:16.238651 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:43:16 crc kubenswrapper[4895]: E0129 08:43:16.238723 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:43:16 crc kubenswrapper[4895]: E0129 08:43:16.238788 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:43:16 crc kubenswrapper[4895]: I0129 08:43:16.908138 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podStartSLOduration=111.908108568 podStartE2EDuration="1m51.908108568s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:16.171892759 +0000 UTC m=+137.813400905" watchObservedRunningTime="2026-01-29 08:43:16.908108568 +0000 UTC m=+138.549616714" Jan 29 08:43:16 crc kubenswrapper[4895]: I0129 08:43:16.908494 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-g4585"] Jan 29 08:43:16 crc kubenswrapper[4895]: I0129 08:43:16.908610 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:43:16 crc kubenswrapper[4895]: E0129 08:43:16.908729 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:43:18 crc kubenswrapper[4895]: I0129 08:43:18.210356 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:43:18 crc kubenswrapper[4895]: I0129 08:43:18.210371 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:43:18 crc kubenswrapper[4895]: E0129 08:43:18.210891 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g4585" podUID="d167bf78-4ea9-42d8-8ab6-6aaf234e102e" Jan 29 08:43:18 crc kubenswrapper[4895]: I0129 08:43:18.210427 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:43:18 crc kubenswrapper[4895]: E0129 08:43:18.210976 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 08:43:18 crc kubenswrapper[4895]: I0129 08:43:18.210414 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:43:18 crc kubenswrapper[4895]: E0129 08:43:18.211031 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 08:43:18 crc kubenswrapper[4895]: E0129 08:43:18.211096 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 08:43:20 crc kubenswrapper[4895]: I0129 08:43:20.210599 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:43:20 crc kubenswrapper[4895]: I0129 08:43:20.210666 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:43:20 crc kubenswrapper[4895]: I0129 08:43:20.210724 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:43:20 crc kubenswrapper[4895]: I0129 08:43:20.210629 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:43:20 crc kubenswrapper[4895]: I0129 08:43:20.214005 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 08:43:20 crc kubenswrapper[4895]: I0129 08:43:20.214205 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 08:43:20 crc kubenswrapper[4895]: I0129 08:43:20.214289 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 08:43:20 crc kubenswrapper[4895]: I0129 08:43:20.214614 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 08:43:20 crc kubenswrapper[4895]: I0129 08:43:20.214998 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 08:43:20 crc kubenswrapper[4895]: I0129 08:43:20.215086 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 08:43:21 crc kubenswrapper[4895]: I0129 08:43:21.144062 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.120936 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.121096 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:43:28 crc kubenswrapper[4895]: E0129 08:43:28.121197 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:45:30.121158266 +0000 UTC m=+271.762666412 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.121369 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.122327 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:43:28 crc 
kubenswrapper[4895]: I0129 08:43:28.128023 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.222992 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.223536 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.226543 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.226651 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.329852 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.337686 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.354280 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 08:43:28 crc kubenswrapper[4895]: W0129 08:43:28.744457 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-f45373c19f5108127ab7c0c592ba7b73847544e4eb2baeb3a94a451e2b98d879 WatchSource:0}: Error finding container f45373c19f5108127ab7c0c592ba7b73847544e4eb2baeb3a94a451e2b98d879: Status 404 returned error can't find the container with id f45373c19f5108127ab7c0c592ba7b73847544e4eb2baeb3a94a451e2b98d879 Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.930982 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"efede819a3875fcb359f88d1259b7b75bde54b85de1f7a10ebf971bb7bcbb2ae"} Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.931058 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"476452e530c80c87cfd0aee95a00ba807fc4785c1c039f873336fee6707abb9d"} Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.932383 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1ba5eee8117430e1b862df21b04f3834bc93df664c22fbe49a16fe55233540fb"} Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.932487 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f45373c19f5108127ab7c0c592ba7b73847544e4eb2baeb3a94a451e2b98d879"} Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.933681 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"edc1845b7b76c2d37dee18eae5e43cf1a5b6eec1e9bc96381f45b6830418d4ae"} Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.933709 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"57352d538d4ed768fb6811295db2b99dd31014a11ecb7e41e45f6b5d6af0876e"} Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.934061 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:43:28 crc kubenswrapper[4895]: I0129 08:43:28.964679 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.024851 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-klns8"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.025819 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.027689 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6696n"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.028352 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.028890 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.029381 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.029873 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.030003 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.030505 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.031375 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.031542 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.031549 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.031833 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.033196 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.033665 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.034472 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.034866 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.035041 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.034867 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.035345 
4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.035980 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.037480 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.037670 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.038035 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.038223 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.038441 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.039859 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.040028 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.040217 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.040602 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.040666 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.040761 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.040794 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.040843 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.041191 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.041349 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.041570 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.042870 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.043280 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.043491 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.043742 4895 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.043770 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rhk9v"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.044412 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.047945 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.048286 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.048476 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.048875 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.051533 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-klns8"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.051588 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xc6q5"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.052166 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.062256 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.062281 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.062554 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.062799 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.063256 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.063442 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.063944 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.064279 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.064493 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.064654 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bkbvc"] Jan 29 
08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.065377 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.065542 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.065683 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.065820 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.065875 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.066273 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-w8vqq"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.066451 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bkbvc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.066764 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-w8vqq" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.066889 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.067284 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.067304 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.067669 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.075992 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6696n"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.076665 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gc8mc"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.081139 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.085431 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.085710 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.085436 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.086181 4895 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.086891 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.087809 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.088307 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.088713 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gc8mc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.089753 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.091298 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.091489 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.093185 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.093877 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.094265 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 
08:43:29.094462 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.094904 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.120931 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.121744 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.122380 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.123696 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.126893 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.127185 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.127413 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.127576 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.127782 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.128785 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.129218 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.130064 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132525 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0b82afa3-8f94-41e2-828e-4debc9e73088-etcd-serving-ca\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132574 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b82afa3-8f94-41e2-828e-4debc9e73088-config\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132605 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rxfq\" (UniqueName: \"kubernetes.io/projected/0bfa8be2-ad8a-4253-9406-feb1dbd01a00-kube-api-access-5rxfq\") pod 
\"console-operator-58897d9998-bkbvc\" (UID: \"0bfa8be2-ad8a-4253-9406-feb1dbd01a00\") " pod="openshift-console-operator/console-operator-58897d9998-bkbvc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132628 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b82afa3-8f94-41e2-828e-4debc9e73088-trusted-ca-bundle\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132650 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef82cf5f-56e2-4e0e-9a7f-674337086996-serving-cert\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132675 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5203d54b-a735-4118-bae0-7554299a98cf-images\") pod \"machine-api-operator-5694c8668f-xc6q5\" (UID: \"5203d54b-a735-4118-bae0-7554299a98cf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132701 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-client-ca\") pod \"controller-manager-879f6c89f-6696n\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132721 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0b82afa3-8f94-41e2-828e-4debc9e73088-node-pullsecrets\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132742 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ede88132-a555-4d42-a520-524081bcfcf8-service-ca-bundle\") pod \"authentication-operator-69f744f599-rhk9v\" (UID: \"ede88132-a555-4d42-a520-524081bcfcf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132763 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ef82cf5f-56e2-4e0e-9a7f-674337086996-audit-policies\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132787 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwst6\" (UniqueName: \"kubernetes.io/projected/ede88132-a555-4d42-a520-524081bcfcf8-kube-api-access-bwst6\") pod \"authentication-operator-69f744f599-rhk9v\" (UID: \"ede88132-a555-4d42-a520-524081bcfcf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132811 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6696n\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132837 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508-auth-proxy-config\") pod \"machine-approver-56656f9798-zdksh\" (UID: \"d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132860 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0b82afa3-8f94-41e2-828e-4debc9e73088-encryption-config\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132886 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5203d54b-a735-4118-bae0-7554299a98cf-config\") pod \"machine-api-operator-5694c8668f-xc6q5\" (UID: \"5203d54b-a735-4118-bae0-7554299a98cf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132912 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508-config\") pod \"machine-approver-56656f9798-zdksh\" (UID: \"d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132974 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/26ddacfd-315a-46a3-a9a1-7149df69ef84-client-ca\") pod \"route-controller-manager-6576b87f9c-bc7pv\" (UID: \"26ddacfd-315a-46a3-a9a1-7149df69ef84\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.132997 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508-machine-approver-tls\") pod \"machine-approver-56656f9798-zdksh\" (UID: \"d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133037 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnwcx\" (UniqueName: \"kubernetes.io/projected/0e8ec468-a940-452a-975b-60a761b9f44f-kube-api-access-lnwcx\") pod \"downloads-7954f5f757-w8vqq\" (UID: \"0e8ec468-a940-452a-975b-60a761b9f44f\") " pod="openshift-console/downloads-7954f5f757-w8vqq" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133059 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede88132-a555-4d42-a520-524081bcfcf8-config\") pod \"authentication-operator-69f744f599-rhk9v\" (UID: \"ede88132-a555-4d42-a520-524081bcfcf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133079 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ef82cf5f-56e2-4e0e-9a7f-674337086996-etcd-client\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 
08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133101 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ef82cf5f-56e2-4e0e-9a7f-674337086996-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133121 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-serving-cert\") pod \"controller-manager-879f6c89f-6696n\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133147 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bfa8be2-ad8a-4253-9406-feb1dbd01a00-serving-cert\") pod \"console-operator-58897d9998-bkbvc\" (UID: \"0bfa8be2-ad8a-4253-9406-feb1dbd01a00\") " pod="openshift-console-operator/console-operator-58897d9998-bkbvc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133173 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv2qw\" (UniqueName: \"kubernetes.io/projected/0b82afa3-8f94-41e2-828e-4debc9e73088-kube-api-access-zv2qw\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133196 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/025c284f-6fab-4bf3-8fba-63f663c2e621-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-n8q46\" (UID: \"025c284f-6fab-4bf3-8fba-63f663c2e621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133223 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ede88132-a555-4d42-a520-524081bcfcf8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rhk9v\" (UID: \"ede88132-a555-4d42-a520-524081bcfcf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133249 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b82afa3-8f94-41e2-828e-4debc9e73088-audit-dir\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133273 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26ddacfd-315a-46a3-a9a1-7149df69ef84-config\") pod \"route-controller-manager-6576b87f9c-bc7pv\" (UID: \"26ddacfd-315a-46a3-a9a1-7149df69ef84\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133299 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5203d54b-a735-4118-bae0-7554299a98cf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xc6q5\" (UID: \"5203d54b-a735-4118-bae0-7554299a98cf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 
08:43:29.133332 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b82afa3-8f94-41e2-828e-4debc9e73088-serving-cert\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133357 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ef82cf5f-56e2-4e0e-9a7f-674337086996-encryption-config\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133379 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bfa8be2-ad8a-4253-9406-feb1dbd01a00-config\") pod \"console-operator-58897d9998-bkbvc\" (UID: \"0bfa8be2-ad8a-4253-9406-feb1dbd01a00\") " pod="openshift-console-operator/console-operator-58897d9998-bkbvc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133406 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zjlg\" (UniqueName: \"kubernetes.io/projected/26ddacfd-315a-46a3-a9a1-7149df69ef84-kube-api-access-5zjlg\") pod \"route-controller-manager-6576b87f9c-bc7pv\" (UID: \"26ddacfd-315a-46a3-a9a1-7149df69ef84\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133432 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b82afa3-8f94-41e2-828e-4debc9e73088-etcd-client\") pod \"apiserver-76f77b778f-klns8\" (UID: 
\"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133455 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0b82afa3-8f94-41e2-828e-4debc9e73088-audit\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133479 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26ddacfd-315a-46a3-a9a1-7149df69ef84-serving-cert\") pod \"route-controller-manager-6576b87f9c-bc7pv\" (UID: \"26ddacfd-315a-46a3-a9a1-7149df69ef84\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133505 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vdp8\" (UniqueName: \"kubernetes.io/projected/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-kube-api-access-4vdp8\") pod \"controller-manager-879f6c89f-6696n\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133528 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnnc4\" (UniqueName: \"kubernetes.io/projected/d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508-kube-api-access-mnnc4\") pod \"machine-approver-56656f9798-zdksh\" (UID: \"d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133551 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ede88132-a555-4d42-a520-524081bcfcf8-serving-cert\") pod \"authentication-operator-69f744f599-rhk9v\" (UID: \"ede88132-a555-4d42-a520-524081bcfcf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133571 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef82cf5f-56e2-4e0e-9a7f-674337086996-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133589 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ef82cf5f-56e2-4e0e-9a7f-674337086996-audit-dir\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133612 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0bfa8be2-ad8a-4253-9406-feb1dbd01a00-trusted-ca\") pod \"console-operator-58897d9998-bkbvc\" (UID: \"0bfa8be2-ad8a-4253-9406-feb1dbd01a00\") " pod="openshift-console-operator/console-operator-58897d9998-bkbvc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133632 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwgxb\" (UniqueName: \"kubernetes.io/projected/5203d54b-a735-4118-bae0-7554299a98cf-kube-api-access-nwgxb\") pod \"machine-api-operator-5694c8668f-xc6q5\" (UID: \"5203d54b-a735-4118-bae0-7554299a98cf\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133652 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdn4g\" (UniqueName: \"kubernetes.io/projected/ef82cf5f-56e2-4e0e-9a7f-674337086996-kube-api-access-mdn4g\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133673 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0b82afa3-8f94-41e2-828e-4debc9e73088-image-import-ca\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133694 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/025c284f-6fab-4bf3-8fba-63f663c2e621-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n8q46\" (UID: \"025c284f-6fab-4bf3-8fba-63f663c2e621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133714 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-config\") pod \"controller-manager-879f6c89f-6696n\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.133748 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9dt5\" (UniqueName: 
\"kubernetes.io/projected/025c284f-6fab-4bf3-8fba-63f663c2e621-kube-api-access-s9dt5\") pod \"openshift-apiserver-operator-796bbdcf4f-n8q46\" (UID: \"025c284f-6fab-4bf3-8fba-63f663c2e621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.141653 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5fc8p"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.143178 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.143647 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.144013 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.144389 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.144676 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.149029 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.149525 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.149775 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.150087 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.150270 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.150547 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.150840 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.151501 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.151551 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.151550 4895 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.153568 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.154129 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.155644 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.155683 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.155732 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.155765 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.155793 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.155988 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.156073 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.156107 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.164693 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.164769 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zd4f9"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.164716 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.165616 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bt8sz"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.165993 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9mccd"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.166597 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mccd" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.169000 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.169104 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.170513 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.170665 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.170778 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.171077 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.173894 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-f64b6"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.175577 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.185424 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.215570 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.218480 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.218679 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.222283 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.230439 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.230851 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.237082 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.237263 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.238196 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.238301 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5203d54b-a735-4118-bae0-7554299a98cf-images\") pod \"machine-api-operator-5694c8668f-xc6q5\" (UID: \"5203d54b-a735-4118-bae0-7554299a98cf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.238346 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2fd7a59-87f6-4ae1-9d60-646916752cef-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2rhjz\" (UID: \"a2fd7a59-87f6-4ae1-9d60-646916752cef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241184 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5203d54b-a735-4118-bae0-7554299a98cf-images\") pod \"machine-api-operator-5694c8668f-xc6q5\" (UID: \"5203d54b-a735-4118-bae0-7554299a98cf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.238375 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241545 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/148b39e9-bdef-40c7-a6e1-eb1e922710f5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xr69x\" (UID: \"148b39e9-bdef-40c7-a6e1-eb1e922710f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241569 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4458bb2e-c5b0-4553-a59c-f9ede889b5f5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4cwvw\" (UID: \"4458bb2e-c5b0-4553-a59c-f9ede889b5f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241605 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ede88132-a555-4d42-a520-524081bcfcf8-service-ca-bundle\") pod \"authentication-operator-69f744f599-rhk9v\" (UID: \"ede88132-a555-4d42-a520-524081bcfcf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241629 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-client-ca\") pod \"controller-manager-879f6c89f-6696n\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241654 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241683 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0b82afa3-8f94-41e2-828e-4debc9e73088-node-pullsecrets\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241707 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5203d54b-a735-4118-bae0-7554299a98cf-config\") pod \"machine-api-operator-5694c8668f-xc6q5\" (UID: \"5203d54b-a735-4118-bae0-7554299a98cf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241731 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ef82cf5f-56e2-4e0e-9a7f-674337086996-audit-policies\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241754 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwst6\" (UniqueName: \"kubernetes.io/projected/ede88132-a555-4d42-a520-524081bcfcf8-kube-api-access-bwst6\") pod \"authentication-operator-69f744f599-rhk9v\" (UID: \"ede88132-a555-4d42-a520-524081bcfcf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241783 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-6696n\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241811 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508-auth-proxy-config\") pod \"machine-approver-56656f9798-zdksh\" (UID: \"d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241839 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0b82afa3-8f94-41e2-828e-4debc9e73088-encryption-config\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241861 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508-config\") pod \"machine-approver-56656f9798-zdksh\" (UID: \"d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241888 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ee8a736-336a-4b9a-a5d3-df1d4da6da62-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p8bg5\" (UID: \"2ee8a736-336a-4b9a-a5d3-df1d4da6da62\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241951 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4458bb2e-c5b0-4553-a59c-f9ede889b5f5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4cwvw\" (UID: \"4458bb2e-c5b0-4553-a59c-f9ede889b5f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.241977 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/72fba804-18f2-4fae-addd-49c6b152c262-default-certificate\") pod \"router-default-5444994796-f64b6\" (UID: \"72fba804-18f2-4fae-addd-49c6b152c262\") " pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242007 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnwcx\" (UniqueName: \"kubernetes.io/projected/0e8ec468-a940-452a-975b-60a761b9f44f-kube-api-access-lnwcx\") pod \"downloads-7954f5f757-w8vqq\" (UID: \"0e8ec468-a940-452a-975b-60a761b9f44f\") " pod="openshift-console/downloads-7954f5f757-w8vqq" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242029 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26ddacfd-315a-46a3-a9a1-7149df69ef84-client-ca\") pod \"route-controller-manager-6576b87f9c-bc7pv\" (UID: \"26ddacfd-315a-46a3-a9a1-7149df69ef84\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242050 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508-machine-approver-tls\") pod \"machine-approver-56656f9798-zdksh\" (UID: \"d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242079 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ef82cf5f-56e2-4e0e-9a7f-674337086996-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242108 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede88132-a555-4d42-a520-524081bcfcf8-config\") pod \"authentication-operator-69f744f599-rhk9v\" (UID: \"ede88132-a555-4d42-a520-524081bcfcf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242135 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ef82cf5f-56e2-4e0e-9a7f-674337086996-etcd-client\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242158 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv2qw\" (UniqueName: \"kubernetes.io/projected/0b82afa3-8f94-41e2-828e-4debc9e73088-kube-api-access-zv2qw\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242177 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-serving-cert\") pod \"controller-manager-879f6c89f-6696n\" (UID: 
\"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242198 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bfa8be2-ad8a-4253-9406-feb1dbd01a00-serving-cert\") pod \"console-operator-58897d9998-bkbvc\" (UID: \"0bfa8be2-ad8a-4253-9406-feb1dbd01a00\") " pod="openshift-console-operator/console-operator-58897d9998-bkbvc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242221 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72fba804-18f2-4fae-addd-49c6b152c262-service-ca-bundle\") pod \"router-default-5444994796-f64b6\" (UID: \"72fba804-18f2-4fae-addd-49c6b152c262\") " pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242248 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242275 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242295 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkccl\" (UniqueName: \"kubernetes.io/projected/e6030804-d717-42c9-b2b2-8eaaadaddca0-kube-api-access-fkccl\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242334 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9395a191-b3a5-4b32-b463-1af135a25807-etcd-ca\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242362 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/025c284f-6fab-4bf3-8fba-63f663c2e621-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n8q46\" (UID: \"025c284f-6fab-4bf3-8fba-63f663c2e621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242385 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbw8f\" (UniqueName: \"kubernetes.io/projected/486af04c-0ffa-435d-8a8e-4867f8c0143e-kube-api-access-lbw8f\") pod \"multus-admission-controller-857f4d67dd-9mccd\" (UID: \"486af04c-0ffa-435d-8a8e-4867f8c0143e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mccd" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242412 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26ddacfd-315a-46a3-a9a1-7149df69ef84-config\") pod \"route-controller-manager-6576b87f9c-bc7pv\" (UID: \"26ddacfd-315a-46a3-a9a1-7149df69ef84\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242434 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ede88132-a555-4d42-a520-524081bcfcf8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rhk9v\" (UID: \"ede88132-a555-4d42-a520-524081bcfcf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242457 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242482 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8bf5c6f5-7b1f-4bf7-9d47-b181232c1107-available-featuregates\") pod \"openshift-config-operator-7777fb866f-j5v6v\" (UID: \"8bf5c6f5-7b1f-4bf7-9d47-b181232c1107\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242535 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b82afa3-8f94-41e2-828e-4debc9e73088-audit-dir\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242564 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5203d54b-a735-4118-bae0-7554299a98cf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xc6q5\" (UID: \"5203d54b-a735-4118-bae0-7554299a98cf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242587 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9395a191-b3a5-4b32-b463-1af135a25807-etcd-client\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242611 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n992\" (UniqueName: \"kubernetes.io/projected/23e4847e-5b39-4a3b-aada-5d8c28c162e8-kube-api-access-5n992\") pod \"dns-operator-744455d44c-gc8mc\" (UID: \"23e4847e-5b39-4a3b-aada-5d8c28c162e8\") " pod="openshift-dns-operator/dns-operator-744455d44c-gc8mc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242632 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72fba804-18f2-4fae-addd-49c6b152c262-metrics-certs\") pod \"router-default-5444994796-f64b6\" (UID: \"72fba804-18f2-4fae-addd-49c6b152c262\") " pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242652 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e6030804-d717-42c9-b2b2-8eaaadaddca0-audit-dir\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc 
kubenswrapper[4895]: I0129 08:43:29.242674 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9395a191-b3a5-4b32-b463-1af135a25807-config\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242703 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/148b39e9-bdef-40c7-a6e1-eb1e922710f5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xr69x\" (UID: \"148b39e9-bdef-40c7-a6e1-eb1e922710f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242726 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242760 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b82afa3-8f94-41e2-828e-4debc9e73088-serving-cert\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242785 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6kkm\" (UniqueName: \"kubernetes.io/projected/4458bb2e-c5b0-4553-a59c-f9ede889b5f5-kube-api-access-c6kkm\") 
pod \"cluster-image-registry-operator-dc59b4c8b-4cwvw\" (UID: \"4458bb2e-c5b0-4553-a59c-f9ede889b5f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242814 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ef82cf5f-56e2-4e0e-9a7f-674337086996-encryption-config\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242838 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bfa8be2-ad8a-4253-9406-feb1dbd01a00-config\") pod \"console-operator-58897d9998-bkbvc\" (UID: \"0bfa8be2-ad8a-4253-9406-feb1dbd01a00\") " pod="openshift-console-operator/console-operator-58897d9998-bkbvc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242861 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/23e4847e-5b39-4a3b-aada-5d8c28c162e8-metrics-tls\") pod \"dns-operator-744455d44c-gc8mc\" (UID: \"23e4847e-5b39-4a3b-aada-5d8c28c162e8\") " pod="openshift-dns-operator/dns-operator-744455d44c-gc8mc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242885 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a2fd7a59-87f6-4ae1-9d60-646916752cef-metrics-tls\") pod \"ingress-operator-5b745b69d9-2rhjz\" (UID: \"a2fd7a59-87f6-4ae1-9d60-646916752cef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242963 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.242995 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnpt8\" (UniqueName: \"kubernetes.io/projected/8bf5c6f5-7b1f-4bf7-9d47-b181232c1107-kube-api-access-bnpt8\") pod \"openshift-config-operator-7777fb866f-j5v6v\" (UID: \"8bf5c6f5-7b1f-4bf7-9d47-b181232c1107\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.243049 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.243074 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2fd7a59-87f6-4ae1-9d60-646916752cef-trusted-ca\") pod \"ingress-operator-5b745b69d9-2rhjz\" (UID: \"a2fd7a59-87f6-4ae1-9d60-646916752cef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.243104 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zjlg\" (UniqueName: \"kubernetes.io/projected/26ddacfd-315a-46a3-a9a1-7149df69ef84-kube-api-access-5zjlg\") pod \"route-controller-manager-6576b87f9c-bc7pv\" (UID: 
\"26ddacfd-315a-46a3-a9a1-7149df69ef84\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.243128 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b82afa3-8f94-41e2-828e-4debc9e73088-etcd-client\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.243720 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6696n\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.244601 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0b82afa3-8f94-41e2-828e-4debc9e73088-audit\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.244639 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26ddacfd-315a-46a3-a9a1-7149df69ef84-serving-cert\") pod \"route-controller-manager-6576b87f9c-bc7pv\" (UID: \"26ddacfd-315a-46a3-a9a1-7149df69ef84\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.244667 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shl9g\" (UniqueName: 
\"kubernetes.io/projected/2ee8a736-336a-4b9a-a5d3-df1d4da6da62-kube-api-access-shl9g\") pod \"cluster-samples-operator-665b6dd947-p8bg5\" (UID: \"2ee8a736-336a-4b9a-a5d3-df1d4da6da62\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.244691 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t75gt\" (UniqueName: \"kubernetes.io/projected/72fba804-18f2-4fae-addd-49c6b152c262-kube-api-access-t75gt\") pod \"router-default-5444994796-f64b6\" (UID: \"72fba804-18f2-4fae-addd-49c6b152c262\") " pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.244867 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx98p\" (UniqueName: \"kubernetes.io/projected/9395a191-b3a5-4b32-b463-1af135a25807-kube-api-access-dx98p\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.244898 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vdp8\" (UniqueName: \"kubernetes.io/projected/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-kube-api-access-4vdp8\") pod \"controller-manager-879f6c89f-6696n\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.244943 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnnc4\" (UniqueName: \"kubernetes.io/projected/d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508-kube-api-access-mnnc4\") pod \"machine-approver-56656f9798-zdksh\" (UID: \"d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.244974 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/486af04c-0ffa-435d-8a8e-4867f8c0143e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9mccd\" (UID: \"486af04c-0ffa-435d-8a8e-4867f8c0143e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mccd" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245001 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ef82cf5f-56e2-4e0e-9a7f-674337086996-audit-dir\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245023 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ede88132-a555-4d42-a520-524081bcfcf8-serving-cert\") pod \"authentication-operator-69f744f599-rhk9v\" (UID: \"ede88132-a555-4d42-a520-524081bcfcf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245047 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245082 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ef82cf5f-56e2-4e0e-9a7f-674337086996-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245108 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0bfa8be2-ad8a-4253-9406-feb1dbd01a00-trusted-ca\") pod \"console-operator-58897d9998-bkbvc\" (UID: \"0bfa8be2-ad8a-4253-9406-feb1dbd01a00\") " pod="openshift-console-operator/console-operator-58897d9998-bkbvc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245135 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/148b39e9-bdef-40c7-a6e1-eb1e922710f5-config\") pod \"kube-controller-manager-operator-78b949d7b-xr69x\" (UID: \"148b39e9-bdef-40c7-a6e1-eb1e922710f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245191 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/72fba804-18f2-4fae-addd-49c6b152c262-stats-auth\") pod \"router-default-5444994796-f64b6\" (UID: \"72fba804-18f2-4fae-addd-49c6b152c262\") " pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245217 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0b82afa3-8f94-41e2-828e-4debc9e73088-node-pullsecrets\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245359 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245421 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0b82afa3-8f94-41e2-828e-4debc9e73088-image-import-ca\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245444 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwgxb\" (UniqueName: \"kubernetes.io/projected/5203d54b-a735-4118-bae0-7554299a98cf-kube-api-access-nwgxb\") pod \"machine-api-operator-5694c8668f-xc6q5\" (UID: \"5203d54b-a735-4118-bae0-7554299a98cf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245467 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdn4g\" (UniqueName: \"kubernetes.io/projected/ef82cf5f-56e2-4e0e-9a7f-674337086996-kube-api-access-mdn4g\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245495 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k4lj\" (UniqueName: \"kubernetes.io/projected/a2fd7a59-87f6-4ae1-9d60-646916752cef-kube-api-access-8k4lj\") pod \"ingress-operator-5b745b69d9-2rhjz\" (UID: \"a2fd7a59-87f6-4ae1-9d60-646916752cef\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245520 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bf5c6f5-7b1f-4bf7-9d47-b181232c1107-serving-cert\") pod \"openshift-config-operator-7777fb866f-j5v6v\" (UID: \"8bf5c6f5-7b1f-4bf7-9d47-b181232c1107\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245550 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/025c284f-6fab-4bf3-8fba-63f663c2e621-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n8q46\" (UID: \"025c284f-6fab-4bf3-8fba-63f663c2e621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245570 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508-config\") pod \"machine-approver-56656f9798-zdksh\" (UID: \"d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.245594 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-config\") pod \"controller-manager-879f6c89f-6696n\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.246005 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-client-ca\") pod 
\"controller-manager-879f6c89f-6696n\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.246543 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26ddacfd-315a-46a3-a9a1-7149df69ef84-client-ca\") pod \"route-controller-manager-6576b87f9c-bc7pv\" (UID: \"26ddacfd-315a-46a3-a9a1-7149df69ef84\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.246683 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0b82afa3-8f94-41e2-828e-4debc9e73088-audit\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.247117 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ede88132-a555-4d42-a520-524081bcfcf8-service-ca-bundle\") pod \"authentication-operator-69f744f599-rhk9v\" (UID: \"ede88132-a555-4d42-a520-524081bcfcf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.247126 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5203d54b-a735-4118-bae0-7554299a98cf-config\") pod \"machine-api-operator-5694c8668f-xc6q5\" (UID: \"5203d54b-a735-4118-bae0-7554299a98cf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.247198 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-config\") pod \"controller-manager-879f6c89f-6696n\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.247704 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ef82cf5f-56e2-4e0e-9a7f-674337086996-audit-policies\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.247704 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26ddacfd-315a-46a3-a9a1-7149df69ef84-config\") pod \"route-controller-manager-6576b87f9c-bc7pv\" (UID: \"26ddacfd-315a-46a3-a9a1-7149df69ef84\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.247740 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508-auth-proxy-config\") pod \"machine-approver-56656f9798-zdksh\" (UID: \"d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.248483 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ef82cf5f-56e2-4e0e-9a7f-674337086996-audit-dir\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.248527 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ede88132-a555-4d42-a520-524081bcfcf8-config\") pod \"authentication-operator-69f744f599-rhk9v\" (UID: \"ede88132-a555-4d42-a520-524081bcfcf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.248589 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ef82cf5f-56e2-4e0e-9a7f-674337086996-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.248715 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef82cf5f-56e2-4e0e-9a7f-674337086996-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.250431 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0b82afa3-8f94-41e2-828e-4debc9e73088-image-import-ca\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.251402 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/025c284f-6fab-4bf3-8fba-63f663c2e621-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n8q46\" (UID: \"025c284f-6fab-4bf3-8fba-63f663c2e621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.251722 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0b82afa3-8f94-41e2-828e-4debc9e73088-encryption-config\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.251783 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0bfa8be2-ad8a-4253-9406-feb1dbd01a00-trusted-ca\") pod \"console-operator-58897d9998-bkbvc\" (UID: \"0bfa8be2-ad8a-4253-9406-feb1dbd01a00\") " pod="openshift-console-operator/console-operator-58897d9998-bkbvc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.252336 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ede88132-a555-4d42-a520-524081bcfcf8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rhk9v\" (UID: \"ede88132-a555-4d42-a520-524081bcfcf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.252395 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bfa8be2-ad8a-4253-9406-feb1dbd01a00-config\") pod \"console-operator-58897d9998-bkbvc\" (UID: \"0bfa8be2-ad8a-4253-9406-feb1dbd01a00\") " pod="openshift-console-operator/console-operator-58897d9998-bkbvc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.252403 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9dt5\" (UniqueName: \"kubernetes.io/projected/025c284f-6fab-4bf3-8fba-63f663c2e621-kube-api-access-s9dt5\") pod \"openshift-apiserver-operator-796bbdcf4f-n8q46\" (UID: \"025c284f-6fab-4bf3-8fba-63f663c2e621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46" Jan 29 08:43:29 crc kubenswrapper[4895]: 
I0129 08:43:29.252484 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4458bb2e-c5b0-4553-a59c-f9ede889b5f5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4cwvw\" (UID: \"4458bb2e-c5b0-4553-a59c-f9ede889b5f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.252518 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9395a191-b3a5-4b32-b463-1af135a25807-etcd-service-ca\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.252541 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0b82afa3-8f94-41e2-828e-4debc9e73088-etcd-serving-ca\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.252562 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9395a191-b3a5-4b32-b463-1af135a25807-serving-cert\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.252593 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rxfq\" (UniqueName: \"kubernetes.io/projected/0bfa8be2-ad8a-4253-9406-feb1dbd01a00-kube-api-access-5rxfq\") pod \"console-operator-58897d9998-bkbvc\" 
(UID: \"0bfa8be2-ad8a-4253-9406-feb1dbd01a00\") " pod="openshift-console-operator/console-operator-58897d9998-bkbvc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.252615 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-audit-policies\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.252631 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.252653 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b82afa3-8f94-41e2-828e-4debc9e73088-config\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.252750 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b82afa3-8f94-41e2-828e-4debc9e73088-trusted-ca-bundle\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.252787 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ef82cf5f-56e2-4e0e-9a7f-674337086996-serving-cert\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.252858 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.253463 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.253616 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.254049 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.254521 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/025c284f-6fab-4bf3-8fba-63f663c2e621-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n8q46\" (UID: \"025c284f-6fab-4bf3-8fba-63f663c2e621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.254845 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f5cn6"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.255530 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f5cn6" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.257410 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0b82afa3-8f94-41e2-828e-4debc9e73088-etcd-serving-ca\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.257857 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b82afa3-8f94-41e2-828e-4debc9e73088-config\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.257887 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b82afa3-8f94-41e2-828e-4debc9e73088-audit-dir\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.259293 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b82afa3-8f94-41e2-828e-4debc9e73088-trusted-ca-bundle\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.260789 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.262165 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bnrn9"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.262878 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnrn9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.263404 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ef82cf5f-56e2-4e0e-9a7f-674337086996-encryption-config\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.265330 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef82cf5f-56e2-4e0e-9a7f-674337086996-serving-cert\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.265349 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b82afa3-8f94-41e2-828e-4debc9e73088-etcd-client\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.265769 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.265976 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5203d54b-a735-4118-bae0-7554299a98cf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xc6q5\" (UID: \"5203d54b-a735-4118-bae0-7554299a98cf\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.266294 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bfa8be2-ad8a-4253-9406-feb1dbd01a00-serving-cert\") pod \"console-operator-58897d9998-bkbvc\" (UID: \"0bfa8be2-ad8a-4253-9406-feb1dbd01a00\") " pod="openshift-console-operator/console-operator-58897d9998-bkbvc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.268432 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26ddacfd-315a-46a3-a9a1-7149df69ef84-serving-cert\") pod \"route-controller-manager-6576b87f9c-bc7pv\" (UID: \"26ddacfd-315a-46a3-a9a1-7149df69ef84\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.269482 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.271259 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-z5sff"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.271980 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.272367 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.272584 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.275210 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.275431 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.276190 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-xzj4q"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.277076 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.277811 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.278173 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ef82cf5f-56e2-4e0e-9a7f-674337086996-etcd-client\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.278521 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.278688 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508-machine-approver-tls\") pod \"machine-approver-56656f9798-zdksh\" (UID: \"d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.278783 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-xzj4q" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.278886 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rmbm8"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.279340 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.279544 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.279596 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.279635 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.280440 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.280999 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.281496 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ede88132-a555-4d42-a520-524081bcfcf8-serving-cert\") pod \"authentication-operator-69f744f599-rhk9v\" (UID: \"ede88132-a555-4d42-a520-524081bcfcf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.281484 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-serving-cert\") pod \"controller-manager-879f6c89f-6696n\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.281658 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b82afa3-8f94-41e2-828e-4debc9e73088-serving-cert\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.281875 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.281885 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.282801 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.283490 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.285559 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-852sw"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.286758 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.288266 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.288319 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rhk9v"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.288492 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-852sw" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.288672 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xc6q5"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.290871 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-w8vqq"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.292218 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bkbvc"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.293277 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gc8mc"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.293731 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.294395 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.295929 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.297270 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.299402 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.308439 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zd4f9"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 
08:43:29.311470 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bnrn9"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.314177 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.314511 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9mccd"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.319053 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5fc8p"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.323569 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-njfl7"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.325492 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.325576 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.327055 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.329349 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.333034 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.333079 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.336906 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.336997 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.337011 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f5cn6"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.341617 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-z5sff"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.341673 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.341686 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-xzj4q"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.346217 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.346286 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.346300 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.350860 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-bt8sz"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.350934 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rmbm8"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.350948 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.354801 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.354865 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-852sw"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.355902 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.355969 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356002 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnpt8\" (UniqueName: \"kubernetes.io/projected/8bf5c6f5-7b1f-4bf7-9d47-b181232c1107-kube-api-access-bnpt8\") pod 
\"openshift-config-operator-7777fb866f-j5v6v\" (UID: \"8bf5c6f5-7b1f-4bf7-9d47-b181232c1107\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356036 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2fd7a59-87f6-4ae1-9d60-646916752cef-trusted-ca\") pod \"ingress-operator-5b745b69d9-2rhjz\" (UID: \"a2fd7a59-87f6-4ae1-9d60-646916752cef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356066 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shl9g\" (UniqueName: \"kubernetes.io/projected/2ee8a736-336a-4b9a-a5d3-df1d4da6da62-kube-api-access-shl9g\") pod \"cluster-samples-operator-665b6dd947-p8bg5\" (UID: \"2ee8a736-336a-4b9a-a5d3-df1d4da6da62\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356085 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t75gt\" (UniqueName: \"kubernetes.io/projected/72fba804-18f2-4fae-addd-49c6b152c262-kube-api-access-t75gt\") pod \"router-default-5444994796-f64b6\" (UID: \"72fba804-18f2-4fae-addd-49c6b152c262\") " pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356105 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx98p\" (UniqueName: \"kubernetes.io/projected/9395a191-b3a5-4b32-b463-1af135a25807-kube-api-access-dx98p\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356139 4895 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/486af04c-0ffa-435d-8a8e-4867f8c0143e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9mccd\" (UID: \"486af04c-0ffa-435d-8a8e-4867f8c0143e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mccd" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356160 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356184 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/148b39e9-bdef-40c7-a6e1-eb1e922710f5-config\") pod \"kube-controller-manager-operator-78b949d7b-xr69x\" (UID: \"148b39e9-bdef-40c7-a6e1-eb1e922710f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356211 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/72fba804-18f2-4fae-addd-49c6b152c262-stats-auth\") pod \"router-default-5444994796-f64b6\" (UID: \"72fba804-18f2-4fae-addd-49c6b152c262\") " pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356230 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356267 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k4lj\" (UniqueName: \"kubernetes.io/projected/a2fd7a59-87f6-4ae1-9d60-646916752cef-kube-api-access-8k4lj\") pod \"ingress-operator-5b745b69d9-2rhjz\" (UID: \"a2fd7a59-87f6-4ae1-9d60-646916752cef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356287 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bf5c6f5-7b1f-4bf7-9d47-b181232c1107-serving-cert\") pod \"openshift-config-operator-7777fb866f-j5v6v\" (UID: \"8bf5c6f5-7b1f-4bf7-9d47-b181232c1107\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356322 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4458bb2e-c5b0-4553-a59c-f9ede889b5f5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4cwvw\" (UID: \"4458bb2e-c5b0-4553-a59c-f9ede889b5f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356344 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9395a191-b3a5-4b32-b463-1af135a25807-serving-cert\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356367 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/9395a191-b3a5-4b32-b463-1af135a25807-etcd-service-ca\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356397 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-audit-policies\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356422 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356445 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2fd7a59-87f6-4ae1-9d60-646916752cef-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2rhjz\" (UID: \"a2fd7a59-87f6-4ae1-9d60-646916752cef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356464 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 
08:43:29.356485 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/148b39e9-bdef-40c7-a6e1-eb1e922710f5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xr69x\" (UID: \"148b39e9-bdef-40c7-a6e1-eb1e922710f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356506 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4458bb2e-c5b0-4553-a59c-f9ede889b5f5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4cwvw\" (UID: \"4458bb2e-c5b0-4553-a59c-f9ede889b5f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356528 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356559 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ee8a736-336a-4b9a-a5d3-df1d4da6da62-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p8bg5\" (UID: \"2ee8a736-336a-4b9a-a5d3-df1d4da6da62\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356590 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4458bb2e-c5b0-4553-a59c-f9ede889b5f5-bound-sa-token\") 
pod \"cluster-image-registry-operator-dc59b4c8b-4cwvw\" (UID: \"4458bb2e-c5b0-4553-a59c-f9ede889b5f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356613 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/72fba804-18f2-4fae-addd-49c6b152c262-default-certificate\") pod \"router-default-5444994796-f64b6\" (UID: \"72fba804-18f2-4fae-addd-49c6b152c262\") " pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356655 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72fba804-18f2-4fae-addd-49c6b152c262-service-ca-bundle\") pod \"router-default-5444994796-f64b6\" (UID: \"72fba804-18f2-4fae-addd-49c6b152c262\") " pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356675 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356695 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356715 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fkccl\" (UniqueName: \"kubernetes.io/projected/e6030804-d717-42c9-b2b2-8eaaadaddca0-kube-api-access-fkccl\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356737 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbw8f\" (UniqueName: \"kubernetes.io/projected/486af04c-0ffa-435d-8a8e-4867f8c0143e-kube-api-access-lbw8f\") pod \"multus-admission-controller-857f4d67dd-9mccd\" (UID: \"486af04c-0ffa-435d-8a8e-4867f8c0143e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mccd" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356757 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9395a191-b3a5-4b32-b463-1af135a25807-etcd-ca\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356778 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356801 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8bf5c6f5-7b1f-4bf7-9d47-b181232c1107-available-featuregates\") pod \"openshift-config-operator-7777fb866f-j5v6v\" (UID: \"8bf5c6f5-7b1f-4bf7-9d47-b181232c1107\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356822 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9395a191-b3a5-4b32-b463-1af135a25807-etcd-client\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356844 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n992\" (UniqueName: \"kubernetes.io/projected/23e4847e-5b39-4a3b-aada-5d8c28c162e8-kube-api-access-5n992\") pod \"dns-operator-744455d44c-gc8mc\" (UID: \"23e4847e-5b39-4a3b-aada-5d8c28c162e8\") " pod="openshift-dns-operator/dns-operator-744455d44c-gc8mc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356863 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72fba804-18f2-4fae-addd-49c6b152c262-metrics-certs\") pod \"router-default-5444994796-f64b6\" (UID: \"72fba804-18f2-4fae-addd-49c6b152c262\") " pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356884 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e6030804-d717-42c9-b2b2-8eaaadaddca0-audit-dir\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356905 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9395a191-b3a5-4b32-b463-1af135a25807-config\") pod \"etcd-operator-b45778765-zd4f9\" (UID: 
\"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356943 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/148b39e9-bdef-40c7-a6e1-eb1e922710f5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xr69x\" (UID: \"148b39e9-bdef-40c7-a6e1-eb1e922710f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356967 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.356995 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6kkm\" (UniqueName: \"kubernetes.io/projected/4458bb2e-c5b0-4553-a59c-f9ede889b5f5-kube-api-access-c6kkm\") pod \"cluster-image-registry-operator-dc59b4c8b-4cwvw\" (UID: \"4458bb2e-c5b0-4553-a59c-f9ede889b5f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.357017 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/23e4847e-5b39-4a3b-aada-5d8c28c162e8-metrics-tls\") pod \"dns-operator-744455d44c-gc8mc\" (UID: \"23e4847e-5b39-4a3b-aada-5d8c28c162e8\") " pod="openshift-dns-operator/dns-operator-744455d44c-gc8mc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.357041 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a2fd7a59-87f6-4ae1-9d60-646916752cef-metrics-tls\") pod \"ingress-operator-5b745b69d9-2rhjz\" (UID: \"a2fd7a59-87f6-4ae1-9d60-646916752cef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.358586 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4458bb2e-c5b0-4553-a59c-f9ede889b5f5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4cwvw\" (UID: \"4458bb2e-c5b0-4553-a59c-f9ede889b5f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.359139 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.359418 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.360579 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-njfl7"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.360641 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9dn6s"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.361066 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2fd7a59-87f6-4ae1-9d60-646916752cef-trusted-ca\") pod \"ingress-operator-5b745b69d9-2rhjz\" (UID: \"a2fd7a59-87f6-4ae1-9d60-646916752cef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.361465 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-k2v76"] Jan 29 08:43:29 crc 
kubenswrapper[4895]: I0129 08:43:29.362083 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-k2v76" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.362177 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/148b39e9-bdef-40c7-a6e1-eb1e922710f5-config\") pod \"kube-controller-manager-operator-78b949d7b-xr69x\" (UID: \"148b39e9-bdef-40c7-a6e1-eb1e922710f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.362470 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9dn6s" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.363104 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-audit-policies\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.363468 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-k2v76"] Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.364603 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e6030804-d717-42c9-b2b2-8eaaadaddca0-audit-dir\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.365199 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.367044 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8bf5c6f5-7b1f-4bf7-9d47-b181232c1107-available-featuregates\") pod \"openshift-config-operator-7777fb866f-j5v6v\" (UID: \"8bf5c6f5-7b1f-4bf7-9d47-b181232c1107\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.370795 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/148b39e9-bdef-40c7-a6e1-eb1e922710f5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xr69x\" (UID: \"148b39e9-bdef-40c7-a6e1-eb1e922710f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.371390 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.371782 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.373945 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.380329 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/486af04c-0ffa-435d-8a8e-4867f8c0143e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9mccd\" (UID: \"486af04c-0ffa-435d-8a8e-4867f8c0143e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mccd" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.380361 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.380624 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4458bb2e-c5b0-4553-a59c-f9ede889b5f5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4cwvw\" (UID: \"4458bb2e-c5b0-4553-a59c-f9ede889b5f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.380965 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ee8a736-336a-4b9a-a5d3-df1d4da6da62-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p8bg5\" (UID: \"2ee8a736-336a-4b9a-a5d3-df1d4da6da62\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5" 
Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.381411 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.381569 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a2fd7a59-87f6-4ae1-9d60-646916752cef-metrics-tls\") pod \"ingress-operator-5b745b69d9-2rhjz\" (UID: \"a2fd7a59-87f6-4ae1-9d60-646916752cef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.388074 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.394880 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/23e4847e-5b39-4a3b-aada-5d8c28c162e8-metrics-tls\") pod \"dns-operator-744455d44c-gc8mc\" (UID: \"23e4847e-5b39-4a3b-aada-5d8c28c162e8\") " pod="openshift-dns-operator/dns-operator-744455d44c-gc8mc" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.396586 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: 
\"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.396782 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.398593 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bf5c6f5-7b1f-4bf7-9d47-b181232c1107-serving-cert\") pod \"openshift-config-operator-7777fb866f-j5v6v\" (UID: \"8bf5c6f5-7b1f-4bf7-9d47-b181232c1107\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.398859 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.398965 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.399276 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.407172 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.408938 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9395a191-b3a5-4b32-b463-1af135a25807-serving-cert\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.414155 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.417908 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9395a191-b3a5-4b32-b463-1af135a25807-etcd-client\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.433769 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.453397 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.456612 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9395a191-b3a5-4b32-b463-1af135a25807-config\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.473726 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.481428 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9395a191-b3a5-4b32-b463-1af135a25807-etcd-ca\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.493903 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.500900 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9395a191-b3a5-4b32-b463-1af135a25807-etcd-service-ca\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.513896 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.534702 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.554261 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.569226 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/72fba804-18f2-4fae-addd-49c6b152c262-default-certificate\") pod \"router-default-5444994796-f64b6\" (UID: \"72fba804-18f2-4fae-addd-49c6b152c262\") " pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.574406 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.584465 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/72fba804-18f2-4fae-addd-49c6b152c262-stats-auth\") pod \"router-default-5444994796-f64b6\" (UID: \"72fba804-18f2-4fae-addd-49c6b152c262\") " pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.594551 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.598032 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72fba804-18f2-4fae-addd-49c6b152c262-metrics-certs\") pod \"router-default-5444994796-f64b6\" (UID: \"72fba804-18f2-4fae-addd-49c6b152c262\") " pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.614229 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.624372 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72fba804-18f2-4fae-addd-49c6b152c262-service-ca-bundle\") pod \"router-default-5444994796-f64b6\" (UID: \"72fba804-18f2-4fae-addd-49c6b152c262\") " 
pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.633823 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.693762 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.713821 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.733434 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.754589 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.774294 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.794278 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.814129 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.834554 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.871378 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-bwst6\" (UniqueName: \"kubernetes.io/projected/ede88132-a555-4d42-a520-524081bcfcf8-kube-api-access-bwst6\") pod \"authentication-operator-69f744f599-rhk9v\" (UID: \"ede88132-a555-4d42-a520-524081bcfcf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.891971 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnwcx\" (UniqueName: \"kubernetes.io/projected/0e8ec468-a940-452a-975b-60a761b9f44f-kube-api-access-lnwcx\") pod \"downloads-7954f5f757-w8vqq\" (UID: \"0e8ec468-a940-452a-975b-60a761b9f44f\") " pod="openshift-console/downloads-7954f5f757-w8vqq" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.894403 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.911114 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.931112 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv2qw\" (UniqueName: \"kubernetes.io/projected/0b82afa3-8f94-41e2-828e-4debc9e73088-kube-api-access-zv2qw\") pod \"apiserver-76f77b778f-klns8\" (UID: \"0b82afa3-8f94-41e2-828e-4debc9e73088\") " pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.950939 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vdp8\" (UniqueName: \"kubernetes.io/projected/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-kube-api-access-4vdp8\") pod \"controller-manager-879f6c89f-6696n\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.957318 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.976566 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnnc4\" (UniqueName: \"kubernetes.io/projected/d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508-kube-api-access-mnnc4\") pod \"machine-approver-56656f9798-zdksh\" (UID: \"d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.989406 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-w8vqq" Jan 29 08:43:29 crc kubenswrapper[4895]: I0129 08:43:29.989498 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwgxb\" (UniqueName: \"kubernetes.io/projected/5203d54b-a735-4118-bae0-7554299a98cf-kube-api-access-nwgxb\") pod \"machine-api-operator-5694c8668f-xc6q5\" (UID: \"5203d54b-a735-4118-bae0-7554299a98cf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.012232 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdn4g\" (UniqueName: \"kubernetes.io/projected/ef82cf5f-56e2-4e0e-9a7f-674337086996-kube-api-access-mdn4g\") pod \"apiserver-7bbb656c7d-8dlcj\" (UID: \"ef82cf5f-56e2-4e0e-9a7f-674337086996\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.033855 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rxfq\" (UniqueName: \"kubernetes.io/projected/0bfa8be2-ad8a-4253-9406-feb1dbd01a00-kube-api-access-5rxfq\") pod \"console-operator-58897d9998-bkbvc\" (UID: \"0bfa8be2-ad8a-4253-9406-feb1dbd01a00\") " pod="openshift-console-operator/console-operator-58897d9998-bkbvc" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.034452 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.056102 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.057269 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.118836 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.131731 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.157241 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.157446 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.162280 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.162459 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.162457 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.176109 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 08:43:30 crc 
kubenswrapper[4895]: I0129 08:43:30.198762 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.229610 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.233597 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zjlg\" (UniqueName: \"kubernetes.io/projected/26ddacfd-315a-46a3-a9a1-7149df69ef84-kube-api-access-5zjlg\") pod \"route-controller-manager-6576b87f9c-bc7pv\" (UID: \"26ddacfd-315a-46a3-a9a1-7149df69ef84\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.237507 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.264565 4895 request.go:700] Waited for 1.008505969s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&limit=500&resourceVersion=0 Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.264753 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bkbvc" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.267575 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.292149 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9dt5\" (UniqueName: \"kubernetes.io/projected/025c284f-6fab-4bf3-8fba-63f663c2e621-kube-api-access-s9dt5\") pod \"openshift-apiserver-operator-796bbdcf4f-n8q46\" (UID: \"025c284f-6fab-4bf3-8fba-63f663c2e621\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.294028 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.317895 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.334494 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.371055 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.374285 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.394111 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.415650 4895 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.433764 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-klns8"] Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.433822 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rhk9v"] Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.435036 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.461345 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.465481 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.466640 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-w8vqq"] Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.485341 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.494228 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.512638 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6696n"] Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.515256 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.533932 4895 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.553327 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.567684 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.570532 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj"] Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.575933 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.602739 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.615467 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.629936 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xc6q5"] Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.634815 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.662873 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 08:43:30 crc kubenswrapper[4895]: W0129 08:43:30.668832 4895 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5203d54b_a735_4118_bae0_7554299a98cf.slice/crio-896f621a7f320af1916e6652db4902b70d31cd0c8eb51c42bdcbb56deb88de00 WatchSource:0}: Error finding container 896f621a7f320af1916e6652db4902b70d31cd0c8eb51c42bdcbb56deb88de00: Status 404 returned error can't find the container with id 896f621a7f320af1916e6652db4902b70d31cd0c8eb51c42bdcbb56deb88de00 Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.674841 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.678348 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bkbvc"] Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.696318 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.715395 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.740816 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.760231 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.774868 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.793852 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.816081 4895 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.839509 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.841728 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv"] Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.853884 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.868572 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46"] Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.873770 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 08:43:30 crc kubenswrapper[4895]: W0129 08:43:30.877941 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26ddacfd_315a_46a3_a9a1_7149df69ef84.slice/crio-e8342b2ef985d0c9d4a04bc39693ff605146ab3057b172e40e64a6668652994a WatchSource:0}: Error finding container e8342b2ef985d0c9d4a04bc39693ff605146ab3057b172e40e64a6668652994a: Status 404 returned error can't find the container with id e8342b2ef985d0c9d4a04bc39693ff605146ab3057b172e40e64a6668652994a Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.893792 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.914452 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.933933 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.945408 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bkbvc" event={"ID":"0bfa8be2-ad8a-4253-9406-feb1dbd01a00","Type":"ContainerStarted","Data":"bd3fb5f9cc431922865363637a53ebbcccac5a67f205d3d5aed22f193a1bb3bf"} Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.947215 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" event={"ID":"ede88132-a555-4d42-a520-524081bcfcf8","Type":"ContainerStarted","Data":"b7b457d6e13f47c6548cb1030f6c9df583b3ea624490ecdeb1f331253d518539"} Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.947297 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" event={"ID":"ede88132-a555-4d42-a520-524081bcfcf8","Type":"ContainerStarted","Data":"22a9713c26363c6459efd8462f776a840d634152fde488077177b0c4af06be7b"} Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.950482 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-klns8" event={"ID":"0b82afa3-8f94-41e2-828e-4debc9e73088","Type":"ContainerStarted","Data":"fcb5a77af16b4ad155b683d4f63994abdd5c64c285ae1df35f0a921cd22810da"} Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.953820 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" event={"ID":"d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508","Type":"ContainerStarted","Data":"bdcea93e3b30e72039e0c1512e0ac6c2a5e313218a331da5ba23219c9826d37a"} 
Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.954018 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" event={"ID":"d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508","Type":"ContainerStarted","Data":"c03c5afa478c37999e84e86e06c7138c84ad53c086c2f8a9eed69b8b179eef92"} Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.953829 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.955646 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46" event={"ID":"025c284f-6fab-4bf3-8fba-63f663c2e621","Type":"ContainerStarted","Data":"1ce206aeb551a09dd252880a3ff6d494e489cdc3b51e52a0056028e4a9d4ac78"} Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.956387 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" event={"ID":"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb","Type":"ContainerStarted","Data":"dee3c2c17e48676793a954c9e989ac67f8dd983bb4d35988c574fe234a0d7443"} Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.957618 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" event={"ID":"5203d54b-a735-4118-bae0-7554299a98cf","Type":"ContainerStarted","Data":"896f621a7f320af1916e6652db4902b70d31cd0c8eb51c42bdcbb56deb88de00"} Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.958882 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" event={"ID":"26ddacfd-315a-46a3-a9a1-7149df69ef84","Type":"ContainerStarted","Data":"e8342b2ef985d0c9d4a04bc39693ff605146ab3057b172e40e64a6668652994a"} Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.960141 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" event={"ID":"ef82cf5f-56e2-4e0e-9a7f-674337086996","Type":"ContainerStarted","Data":"bb6d70b9e596db3d604b297387865d22bbe170bc6454e6d10b8aec03e5b66c87"} Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.962409 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-w8vqq" event={"ID":"0e8ec468-a940-452a-975b-60a761b9f44f","Type":"ContainerStarted","Data":"2353ca9c1453cfa31debb0617fc987d1aed5ac868566b262915f1a057f75c03c"} Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.962495 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-w8vqq" event={"ID":"0e8ec468-a940-452a-975b-60a761b9f44f","Type":"ContainerStarted","Data":"b14d93f915c02637bcbfa1df9cd73b44c83060392f6f3c9eb34e4b7d22b45a12"} Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.974469 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 08:43:30 crc kubenswrapper[4895]: I0129 08:43:30.994306 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.015614 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.034307 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.054480 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.074787 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.094177 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.113947 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.133908 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.153985 4895 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.173272 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.194352 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.239481 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx98p\" (UniqueName: \"kubernetes.io/projected/9395a191-b3a5-4b32-b463-1af135a25807-kube-api-access-dx98p\") pod \"etcd-operator-b45778765-zd4f9\" (UID: \"9395a191-b3a5-4b32-b463-1af135a25807\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.255559 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnpt8\" (UniqueName: \"kubernetes.io/projected/8bf5c6f5-7b1f-4bf7-9d47-b181232c1107-kube-api-access-bnpt8\") pod \"openshift-config-operator-7777fb866f-j5v6v\" (UID: \"8bf5c6f5-7b1f-4bf7-9d47-b181232c1107\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" Jan 29 08:43:31 crc 
kubenswrapper[4895]: I0129 08:43:31.269801 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k4lj\" (UniqueName: \"kubernetes.io/projected/a2fd7a59-87f6-4ae1-9d60-646916752cef-kube-api-access-8k4lj\") pod \"ingress-operator-5b745b69d9-2rhjz\" (UID: \"a2fd7a59-87f6-4ae1-9d60-646916752cef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.272540 4895 request.go:700] Waited for 1.911742567s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.285112 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.294589 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shl9g\" (UniqueName: \"kubernetes.io/projected/2ee8a736-336a-4b9a-a5d3-df1d4da6da62-kube-api-access-shl9g\") pod \"cluster-samples-operator-665b6dd947-p8bg5\" (UID: \"2ee8a736-336a-4b9a-a5d3-df1d4da6da62\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.308085 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t75gt\" (UniqueName: \"kubernetes.io/projected/72fba804-18f2-4fae-addd-49c6b152c262-kube-api-access-t75gt\") pod \"router-default-5444994796-f64b6\" (UID: \"72fba804-18f2-4fae-addd-49c6b152c262\") " pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.313330 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 08:43:31 crc kubenswrapper[4895]: 
I0129 08:43:31.349984 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2fd7a59-87f6-4ae1-9d60-646916752cef-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2rhjz\" (UID: \"a2fd7a59-87f6-4ae1-9d60-646916752cef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.357868 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.373561 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.394639 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.415980 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.436126 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.454217 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.490452 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n992\" (UniqueName: \"kubernetes.io/projected/23e4847e-5b39-4a3b-aada-5d8c28c162e8-kube-api-access-5n992\") pod \"dns-operator-744455d44c-gc8mc\" (UID: \"23e4847e-5b39-4a3b-aada-5d8c28c162e8\") " pod="openshift-dns-operator/dns-operator-744455d44c-gc8mc" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.503280 4895 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gc8mc" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.514406 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/148b39e9-bdef-40c7-a6e1-eb1e922710f5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xr69x\" (UID: \"148b39e9-bdef-40c7-a6e1-eb1e922710f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.519771 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.533292 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.536224 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.537296 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4458bb2e-c5b0-4553-a59c-f9ede889b5f5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4cwvw\" (UID: \"4458bb2e-c5b0-4553-a59c-f9ede889b5f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.550417 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbw8f\" (UniqueName: \"kubernetes.io/projected/486af04c-0ffa-435d-8a8e-4867f8c0143e-kube-api-access-lbw8f\") pod \"multus-admission-controller-857f4d67dd-9mccd\" (UID: \"486af04c-0ffa-435d-8a8e-4867f8c0143e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mccd" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.565647 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.567858 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zd4f9"] Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.569206 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkccl\" (UniqueName: \"kubernetes.io/projected/e6030804-d717-42c9-b2b2-8eaaadaddca0-kube-api-access-fkccl\") pod \"oauth-openshift-558db77b4-5fc8p\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.569514 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mccd" Jan 29 08:43:31 crc kubenswrapper[4895]: W0129 08:43:31.578530 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9395a191_b3a5_4b32_b463_1af135a25807.slice/crio-46a7078e4ecbc9f9b1533d65e9ae920d59e327628e2927eb3425290c394dda95 WatchSource:0}: Error finding container 46a7078e4ecbc9f9b1533d65e9ae920d59e327628e2927eb3425290c394dda95: Status 404 returned error can't find the container with id 46a7078e4ecbc9f9b1533d65e9ae920d59e327628e2927eb3425290c394dda95 Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.593248 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6kkm\" (UniqueName: \"kubernetes.io/projected/4458bb2e-c5b0-4553-a59c-f9ede889b5f5-kube-api-access-c6kkm\") pod \"cluster-image-registry-operator-dc59b4c8b-4cwvw\" (UID: \"4458bb2e-c5b0-4553-a59c-f9ede889b5f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.596583 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:31 crc kubenswrapper[4895]: W0129 08:43:31.678945 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72fba804_18f2_4fae_addd_49c6b152c262.slice/crio-60592f893ddd87a42be4c3c4826d5ee6fc369f87129acb5ffd16864b1d6fb37c WatchSource:0}: Error finding container 60592f893ddd87a42be4c3c4826d5ee6fc369f87129acb5ffd16864b1d6fb37c: Status 404 returned error can't find the container with id 60592f893ddd87a42be4c3c4826d5ee6fc369f87129acb5ffd16864b1d6fb37c Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.693379 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/766285e2-63c4-4073-9b24-d5fbf4b26638-registry-certificates\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.693454 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-registry-tls\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.693503 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/766285e2-63c4-4073-9b24-d5fbf4b26638-trusted-ca\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.693523 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/766285e2-63c4-4073-9b24-d5fbf4b26638-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.693547 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-bound-sa-token\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.693610 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbxtd\" (UniqueName: \"kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-kube-api-access-gbxtd\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.693654 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.693675 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/766285e2-63c4-4073-9b24-d5fbf4b26638-ca-trust-extracted\") pod 
\"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: E0129 08:43:31.698224 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:32.198197483 +0000 UTC m=+153.839705699 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.795203 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:31 crc kubenswrapper[4895]: E0129 08:43:31.795407 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:32.295372753 +0000 UTC m=+153.936880899 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.795461 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/381231a0-87e3-4e8d-818a-d126f2a73d5f-certs\") pod \"machine-config-server-9dn6s\" (UID: \"381231a0-87e3-4e8d-818a-d126f2a73d5f\") " pod="openshift-machine-config-operator/machine-config-server-9dn6s" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.795496 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/40ec3f19-6989-4c7c-92e2-1d1501a75b24-csi-data-dir\") pod \"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.795523 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8vqh\" (UniqueName: \"kubernetes.io/projected/3aab1def-9cb4-4b52-b6d5-83b033e82b62-kube-api-access-q8vqh\") pod \"ingress-canary-k2v76\" (UID: \"3aab1def-9cb4-4b52-b6d5-83b033e82b62\") " pod="openshift-ingress-canary/ingress-canary-k2v76" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.795585 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/766285e2-63c4-4073-9b24-d5fbf4b26638-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.795614 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-serving-cert\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.795645 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx8vm\" (UniqueName: \"kubernetes.io/projected/5c48be33-f0d6-4b78-9779-fec2a6244f89-kube-api-access-mx8vm\") pod \"dns-default-852sw\" (UID: \"5c48be33-f0d6-4b78-9779-fec2a6244f89\") " pod="openshift-dns/dns-default-852sw" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.796874 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/40ec3f19-6989-4c7c-92e2-1d1501a75b24-registration-dir\") pod \"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.796940 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r47v2\" (UniqueName: \"kubernetes.io/projected/8ebf4fd3-9460-4809-aa41-7db3a1bc0032-kube-api-access-r47v2\") pod \"olm-operator-6b444d44fb-4zbst\" (UID: \"8ebf4fd3-9460-4809-aa41-7db3a1bc0032\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.796987 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9a967adb-d943-4101-8eed-964b97623a5c-proxy-tls\") pod \"machine-config-operator-74547568cd-v4mz4\" (UID: \"9a967adb-d943-4101-8eed-964b97623a5c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797022 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdxql\" (UniqueName: \"kubernetes.io/projected/13c23359-7d69-4f3c-b89a-a25bee602474-kube-api-access-wdxql\") pod \"control-plane-machine-set-operator-78cbb6b69f-f5cn6\" (UID: \"13c23359-7d69-4f3c-b89a-a25bee602474\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f5cn6" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797049 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8538d829-1294-4207-b88d-ae6294983b78-config\") pod \"kube-apiserver-operator-766d6c64bb-l9qrr\" (UID: \"8538d829-1294-4207-b88d-ae6294983b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797114 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d96d68-0600-4d2c-8f16-3abf329f8483-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lmszx\" (UID: \"98d96d68-0600-4d2c-8f16-3abf329f8483\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797140 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c48be33-f0d6-4b78-9779-fec2a6244f89-metrics-tls\") pod \"dns-default-852sw\" (UID: 
\"5c48be33-f0d6-4b78-9779-fec2a6244f89\") " pod="openshift-dns/dns-default-852sw" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797179 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9a967adb-d943-4101-8eed-964b97623a5c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-v4mz4\" (UID: \"9a967adb-d943-4101-8eed-964b97623a5c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797210 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/40ec3f19-6989-4c7c-92e2-1d1501a75b24-socket-dir\") pod \"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797253 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gst68\" (UniqueName: \"kubernetes.io/projected/9bce819e-baa7-403a-8091-66dbc41187af-kube-api-access-gst68\") pod \"service-ca-operator-777779d784-bhjbb\" (UID: \"9bce819e-baa7-403a-8091-66dbc41187af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797279 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d6da3558-9eb3-457a-b1f6-daaaf39ed8ae-srv-cert\") pod \"catalog-operator-68c6474976-4fjjn\" (UID: \"d6da3558-9eb3-457a-b1f6-daaaf39ed8ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797326 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tjvc7\" (UniqueName: \"kubernetes.io/projected/69349a4a-2b55-4b42-9baf-bd951db8643a-kube-api-access-tjvc7\") pod \"service-ca-9c57cc56f-xzj4q\" (UID: \"69349a4a-2b55-4b42-9baf-bd951db8643a\") " pod="openshift-service-ca/service-ca-9c57cc56f-xzj4q" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797383 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797412 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/13c23359-7d69-4f3c-b89a-a25bee602474-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f5cn6\" (UID: \"13c23359-7d69-4f3c-b89a-a25bee602474\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f5cn6" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797458 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-oauth-config\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797495 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bce819e-baa7-403a-8091-66dbc41187af-serving-cert\") pod \"service-ca-operator-777779d784-bhjbb\" (UID: 
\"9bce819e-baa7-403a-8091-66dbc41187af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797518 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ebf4fd3-9460-4809-aa41-7db3a1bc0032-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4zbst\" (UID: \"8ebf4fd3-9460-4809-aa41-7db3a1bc0032\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797629 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d42bddeb-f93a-4603-a38e-1016ca2b3a03-secret-volume\") pod \"collect-profiles-29494590-2xwwg\" (UID: \"d42bddeb-f93a-4603-a38e-1016ca2b3a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797653 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c437b17-e286-4520-bde1-4e13376343e3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jpb5z\" (UID: \"6c437b17-e286-4520-bde1-4e13376343e3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797701 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6560d1c1-3b50-45b8-9ed9-d43d0784efba-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-c82ws\" (UID: \"6560d1c1-3b50-45b8-9ed9-d43d0784efba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws" Jan 29 08:43:31 crc 
kubenswrapper[4895]: I0129 08:43:31.797728 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98d96d68-0600-4d2c-8f16-3abf329f8483-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lmszx\" (UID: \"98d96d68-0600-4d2c-8f16-3abf329f8483\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797752 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48m8g\" (UniqueName: \"kubernetes.io/projected/6c437b17-e286-4520-bde1-4e13376343e3-kube-api-access-48m8g\") pod \"machine-config-controller-84d6567774-jpb5z\" (UID: \"6c437b17-e286-4520-bde1-4e13376343e3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797824 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bmk4\" (UniqueName: \"kubernetes.io/projected/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-kube-api-access-6bmk4\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797870 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f6088a3-2691-4029-a576-2a5abcd3b107-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rmbm8\" (UID: \"8f6088a3-2691-4029-a576-2a5abcd3b107\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797897 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/6c437b17-e286-4520-bde1-4e13376343e3-proxy-tls\") pod \"machine-config-controller-84d6567774-jpb5z\" (UID: \"6c437b17-e286-4520-bde1-4e13376343e3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.797963 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-oauth-serving-cert\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798022 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/69349a4a-2b55-4b42-9baf-bd951db8643a-signing-cabundle\") pod \"service-ca-9c57cc56f-xzj4q\" (UID: \"69349a4a-2b55-4b42-9baf-bd951db8643a\") " pod="openshift-service-ca/service-ca-9c57cc56f-xzj4q" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798076 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733-webhook-cert\") pod \"packageserver-d55dfcdfc-cchbc\" (UID: \"68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798100 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3aab1def-9cb4-4b52-b6d5-83b033e82b62-cert\") pod \"ingress-canary-k2v76\" (UID: \"3aab1def-9cb4-4b52-b6d5-83b033e82b62\") " pod="openshift-ingress-canary/ingress-canary-k2v76" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798128 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-registry-tls\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798155 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f6088a3-2691-4029-a576-2a5abcd3b107-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rmbm8\" (UID: \"8f6088a3-2691-4029-a576-2a5abcd3b107\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798210 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bce819e-baa7-403a-8091-66dbc41187af-config\") pod \"service-ca-operator-777779d784-bhjbb\" (UID: \"9bce819e-baa7-403a-8091-66dbc41187af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798243 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/40ec3f19-6989-4c7c-92e2-1d1501a75b24-plugins-dir\") pod \"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798268 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ndgf\" (UniqueName: \"kubernetes.io/projected/8f6088a3-2691-4029-a576-2a5abcd3b107-kube-api-access-4ndgf\") pod \"marketplace-operator-79b997595-rmbm8\" (UID: 
\"8f6088a3-2691-4029-a576-2a5abcd3b107\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798313 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/766285e2-63c4-4073-9b24-d5fbf4b26638-trusted-ca\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798340 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2gmw\" (UniqueName: \"kubernetes.io/projected/d6ad1a82-f4da-4591-9622-f1611b8133d9-kube-api-access-c2gmw\") pod \"migrator-59844c95c7-bnrn9\" (UID: \"d6ad1a82-f4da-4591-9622-f1611b8133d9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnrn9" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798368 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph477\" (UniqueName: \"kubernetes.io/projected/d6da3558-9eb3-457a-b1f6-daaaf39ed8ae-kube-api-access-ph477\") pod \"catalog-operator-68c6474976-4fjjn\" (UID: \"d6da3558-9eb3-457a-b1f6-daaaf39ed8ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798413 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ebf4fd3-9460-4809-aa41-7db3a1bc0032-srv-cert\") pod \"olm-operator-6b444d44fb-4zbst\" (UID: \"8ebf4fd3-9460-4809-aa41-7db3a1bc0032\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798439 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d42bddeb-f93a-4603-a38e-1016ca2b3a03-config-volume\") pod \"collect-profiles-29494590-2xwwg\" (UID: \"d42bddeb-f93a-4603-a38e-1016ca2b3a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798512 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-bound-sa-token\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798568 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8538d829-1294-4207-b88d-ae6294983b78-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-l9qrr\" (UID: \"8538d829-1294-4207-b88d-ae6294983b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798597 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f526578-7ac2-4afb-9f56-4ac8c15627f7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-x6gfn\" (UID: \"9f526578-7ac2-4afb-9f56-4ac8c15627f7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798656 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d96d68-0600-4d2c-8f16-3abf329f8483-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lmszx\" (UID: 
\"98d96d68-0600-4d2c-8f16-3abf329f8483\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798724 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/40ec3f19-6989-4c7c-92e2-1d1501a75b24-mountpoint-dir\") pod \"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798775 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz9vz\" (UniqueName: \"kubernetes.io/projected/aa8604f8-8033-4126-bbaf-e2648ed77680-kube-api-access-kz9vz\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhx9c\" (UID: \"aa8604f8-8033-4126-bbaf-e2648ed77680\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798812 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa8604f8-8033-4126-bbaf-e2648ed77680-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhx9c\" (UID: \"aa8604f8-8033-4126-bbaf-e2648ed77680\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798840 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9a967adb-d943-4101-8eed-964b97623a5c-images\") pod \"machine-config-operator-74547568cd-v4mz4\" (UID: \"9a967adb-d943-4101-8eed-964b97623a5c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" Jan 29 08:43:31 
crc kubenswrapper[4895]: I0129 08:43:31.798868 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-service-ca\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798952 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-config\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.798989 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbxtd\" (UniqueName: \"kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-kube-api-access-gbxtd\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799017 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa8604f8-8033-4126-bbaf-e2648ed77680-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhx9c\" (UID: \"aa8604f8-8033-4126-bbaf-e2648ed77680\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799045 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l94rb\" (UniqueName: \"kubernetes.io/projected/6560d1c1-3b50-45b8-9ed9-d43d0784efba-kube-api-access-l94rb\") pod 
\"package-server-manager-789f6589d5-c82ws\" (UID: \"6560d1c1-3b50-45b8-9ed9-d43d0784efba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799091 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6v79\" (UniqueName: \"kubernetes.io/projected/9f526578-7ac2-4afb-9f56-4ac8c15627f7-kube-api-access-v6v79\") pod \"openshift-controller-manager-operator-756b6f6bc6-x6gfn\" (UID: \"9f526578-7ac2-4afb-9f56-4ac8c15627f7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799150 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f526578-7ac2-4afb-9f56-4ac8c15627f7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-x6gfn\" (UID: \"9f526578-7ac2-4afb-9f56-4ac8c15627f7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799175 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp5tr\" (UniqueName: \"kubernetes.io/projected/381231a0-87e3-4e8d-818a-d126f2a73d5f-kube-api-access-kp5tr\") pod \"machine-config-server-9dn6s\" (UID: \"381231a0-87e3-4e8d-818a-d126f2a73d5f\") " pod="openshift-machine-config-operator/machine-config-server-9dn6s" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799202 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/766285e2-63c4-4073-9b24-d5fbf4b26638-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799284 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733-tmpfs\") pod \"packageserver-d55dfcdfc-cchbc\" (UID: \"68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799310 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw74g\" (UniqueName: \"kubernetes.io/projected/68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733-kube-api-access-kw74g\") pod \"packageserver-d55dfcdfc-cchbc\" (UID: \"68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799352 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn2xm\" (UniqueName: \"kubernetes.io/projected/40ec3f19-6989-4c7c-92e2-1d1501a75b24-kube-api-access-gn2xm\") pod \"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799437 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733-apiservice-cert\") pod \"packageserver-d55dfcdfc-cchbc\" (UID: \"68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799480 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/d6da3558-9eb3-457a-b1f6-daaaf39ed8ae-profile-collector-cert\") pod \"catalog-operator-68c6474976-4fjjn\" (UID: \"d6da3558-9eb3-457a-b1f6-daaaf39ed8ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799530 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c48be33-f0d6-4b78-9779-fec2a6244f89-config-volume\") pod \"dns-default-852sw\" (UID: \"5c48be33-f0d6-4b78-9779-fec2a6244f89\") " pod="openshift-dns/dns-default-852sw" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799618 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmpkq\" (UniqueName: \"kubernetes.io/projected/9a967adb-d943-4101-8eed-964b97623a5c-kube-api-access-kmpkq\") pod \"machine-config-operator-74547568cd-v4mz4\" (UID: \"9a967adb-d943-4101-8eed-964b97623a5c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799647 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/69349a4a-2b55-4b42-9baf-bd951db8643a-signing-key\") pod \"service-ca-9c57cc56f-xzj4q\" (UID: \"69349a4a-2b55-4b42-9baf-bd951db8643a\") " pod="openshift-service-ca/service-ca-9c57cc56f-xzj4q" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799673 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq496\" (UniqueName: \"kubernetes.io/projected/d42bddeb-f93a-4603-a38e-1016ca2b3a03-kube-api-access-hq496\") pod \"collect-profiles-29494590-2xwwg\" (UID: \"d42bddeb-f93a-4603-a38e-1016ca2b3a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" Jan 29 08:43:31 crc 
kubenswrapper[4895]: I0129 08:43:31.799701 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-trusted-ca-bundle\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799730 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/381231a0-87e3-4e8d-818a-d126f2a73d5f-node-bootstrap-token\") pod \"machine-config-server-9dn6s\" (UID: \"381231a0-87e3-4e8d-818a-d126f2a73d5f\") " pod="openshift-machine-config-operator/machine-config-server-9dn6s" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799796 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/766285e2-63c4-4073-9b24-d5fbf4b26638-registry-certificates\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.799895 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8538d829-1294-4207-b88d-ae6294983b78-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-l9qrr\" (UID: \"8538d829-1294-4207-b88d-ae6294983b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.800787 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/766285e2-63c4-4073-9b24-d5fbf4b26638-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.851664 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.854219 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/766285e2-63c4-4073-9b24-d5fbf4b26638-registry-certificates\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.854791 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/766285e2-63c4-4073-9b24-d5fbf4b26638-trusted-ca\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.872805 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-registry-tls\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.905656 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/766285e2-63c4-4073-9b24-d5fbf4b26638-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 
08:43:31 crc kubenswrapper[4895]: E0129 08:43:31.912956 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:32.412936438 +0000 UTC m=+154.054444584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.913188 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.913507 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.913790 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/40ec3f19-6989-4c7c-92e2-1d1501a75b24-csi-data-dir\") pod \"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.913860 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8vqh\" (UniqueName: \"kubernetes.io/projected/3aab1def-9cb4-4b52-b6d5-83b033e82b62-kube-api-access-q8vqh\") pod \"ingress-canary-k2v76\" (UID: \"3aab1def-9cb4-4b52-b6d5-83b033e82b62\") " pod="openshift-ingress-canary/ingress-canary-k2v76" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.913948 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-serving-cert\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.913989 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx8vm\" (UniqueName: \"kubernetes.io/projected/5c48be33-f0d6-4b78-9779-fec2a6244f89-kube-api-access-mx8vm\") pod \"dns-default-852sw\" (UID: \"5c48be33-f0d6-4b78-9779-fec2a6244f89\") " pod="openshift-dns/dns-default-852sw" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.914014 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/40ec3f19-6989-4c7c-92e2-1d1501a75b24-registration-dir\") pod \"csi-hostpathplugin-njfl7\" (UID: 
\"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.914808 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/40ec3f19-6989-4c7c-92e2-1d1501a75b24-registration-dir\") pod \"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.957227 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d96d68-0600-4d2c-8f16-3abf329f8483-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lmszx\" (UID: \"98d96d68-0600-4d2c-8f16-3abf329f8483\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.957296 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c48be33-f0d6-4b78-9779-fec2a6244f89-metrics-tls\") pod \"dns-default-852sw\" (UID: \"5c48be33-f0d6-4b78-9779-fec2a6244f89\") " pod="openshift-dns/dns-default-852sw" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.957335 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9a967adb-d943-4101-8eed-964b97623a5c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-v4mz4\" (UID: \"9a967adb-d943-4101-8eed-964b97623a5c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.957372 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/40ec3f19-6989-4c7c-92e2-1d1501a75b24-socket-dir\") pod 
\"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.961488 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-serving-cert\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.990794 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/40ec3f19-6989-4c7c-92e2-1d1501a75b24-csi-data-dir\") pod \"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.993804 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8vqh\" (UniqueName: \"kubernetes.io/projected/3aab1def-9cb4-4b52-b6d5-83b033e82b62-kube-api-access-q8vqh\") pod \"ingress-canary-k2v76\" (UID: \"3aab1def-9cb4-4b52-b6d5-83b033e82b62\") " pod="openshift-ingress-canary/ingress-canary-k2v76" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.996492 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d96d68-0600-4d2c-8f16-3abf329f8483-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lmszx\" (UID: \"98d96d68-0600-4d2c-8f16-3abf329f8483\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.996634 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9a967adb-d943-4101-8eed-964b97623a5c-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-v4mz4\" (UID: \"9a967adb-d943-4101-8eed-964b97623a5c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.998572 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/40ec3f19-6989-4c7c-92e2-1d1501a75b24-socket-dir\") pod \"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:31 crc kubenswrapper[4895]: I0129 08:43:31.999453 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/381231a0-87e3-4e8d-818a-d126f2a73d5f-certs\") pod \"machine-config-server-9dn6s\" (UID: \"381231a0-87e3-4e8d-818a-d126f2a73d5f\") " pod="openshift-machine-config-operator/machine-config-server-9dn6s" Jan 29 08:43:32 crc kubenswrapper[4895]: E0129 08:43:32.001250 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:32.501227638 +0000 UTC m=+154.142735794 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.008368 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/381231a0-87e3-4e8d-818a-d126f2a73d5f-certs\") pod \"machine-config-server-9dn6s\" (UID: \"381231a0-87e3-4e8d-818a-d126f2a73d5f\") " pod="openshift-machine-config-operator/machine-config-server-9dn6s" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.018396 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbxtd\" (UniqueName: \"kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-kube-api-access-gbxtd\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.019159 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c48be33-f0d6-4b78-9779-fec2a6244f89-metrics-tls\") pod \"dns-default-852sw\" (UID: \"5c48be33-f0d6-4b78-9779-fec2a6244f89\") " pod="openshift-dns/dns-default-852sw" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.022517 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx8vm\" (UniqueName: \"kubernetes.io/projected/5c48be33-f0d6-4b78-9779-fec2a6244f89-kube-api-access-mx8vm\") pod \"dns-default-852sw\" (UID: \"5c48be33-f0d6-4b78-9779-fec2a6244f89\") " pod="openshift-dns/dns-default-852sw" Jan 29 08:43:32 crc 
kubenswrapper[4895]: I0129 08:43:32.041431 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-bound-sa-token\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.048389 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" event={"ID":"d3aeaaf3-e7cc-49d0-9ad8-fcdc5e15a508","Type":"ContainerStarted","Data":"bcfd354d0e91a3d923c5f49dba16734db4cd5d386b64f8043576786662b275d9"} Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.059968 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46" event={"ID":"025c284f-6fab-4bf3-8fba-63f663c2e621","Type":"ContainerStarted","Data":"b2a4452610d457174d45f48b31ad671f6354ca2227f476ca6bfd081f9e903930"} Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.077812 4895 generic.go:334] "Generic (PLEG): container finished" podID="ef82cf5f-56e2-4e0e-9a7f-674337086996" containerID="44925e3d387416d941c6156e20bd85441c2ebbfe8810ae9ded5582ee724002a2" exitCode=0 Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.077899 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" event={"ID":"ef82cf5f-56e2-4e0e-9a7f-674337086996","Type":"ContainerDied","Data":"44925e3d387416d941c6156e20bd85441c2ebbfe8810ae9ded5582ee724002a2"} Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.093033 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" event={"ID":"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb","Type":"ContainerStarted","Data":"7c3d17aedfda0a6c96c32d55676e6e48b4197899586b9e3dd3128247baafc62c"} 
Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.094107 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.098767 4895 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6696n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.098882 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" podUID="1e7bcbc7-bddb-42f5-915e-d020e66ddeeb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.108751 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f6088a3-2691-4029-a576-2a5abcd3b107-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rmbm8\" (UID: \"8f6088a3-2691-4029-a576-2a5abcd3b107\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.108813 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c437b17-e286-4520-bde1-4e13376343e3-proxy-tls\") pod \"machine-config-controller-84d6567774-jpb5z\" (UID: \"6c437b17-e286-4520-bde1-4e13376343e3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.108857 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-oauth-serving-cert\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.108885 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/69349a4a-2b55-4b42-9baf-bd951db8643a-signing-cabundle\") pod \"service-ca-9c57cc56f-xzj4q\" (UID: \"69349a4a-2b55-4b42-9baf-bd951db8643a\") " pod="openshift-service-ca/service-ca-9c57cc56f-xzj4q" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.108931 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733-webhook-cert\") pod \"packageserver-d55dfcdfc-cchbc\" (UID: \"68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.108956 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3aab1def-9cb4-4b52-b6d5-83b033e82b62-cert\") pod \"ingress-canary-k2v76\" (UID: \"3aab1def-9cb4-4b52-b6d5-83b033e82b62\") " pod="openshift-ingress-canary/ingress-canary-k2v76" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.108986 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f6088a3-2691-4029-a576-2a5abcd3b107-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rmbm8\" (UID: \"8f6088a3-2691-4029-a576-2a5abcd3b107\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109015 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9bce819e-baa7-403a-8091-66dbc41187af-config\") pod \"service-ca-operator-777779d784-bhjbb\" (UID: \"9bce819e-baa7-403a-8091-66dbc41187af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109040 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ndgf\" (UniqueName: \"kubernetes.io/projected/8f6088a3-2691-4029-a576-2a5abcd3b107-kube-api-access-4ndgf\") pod \"marketplace-operator-79b997595-rmbm8\" (UID: \"8f6088a3-2691-4029-a576-2a5abcd3b107\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109065 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/40ec3f19-6989-4c7c-92e2-1d1501a75b24-plugins-dir\") pod \"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109095 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph477\" (UniqueName: \"kubernetes.io/projected/d6da3558-9eb3-457a-b1f6-daaaf39ed8ae-kube-api-access-ph477\") pod \"catalog-operator-68c6474976-4fjjn\" (UID: \"d6da3558-9eb3-457a-b1f6-daaaf39ed8ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109134 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2gmw\" (UniqueName: \"kubernetes.io/projected/d6ad1a82-f4da-4591-9622-f1611b8133d9-kube-api-access-c2gmw\") pod \"migrator-59844c95c7-bnrn9\" (UID: \"d6ad1a82-f4da-4591-9622-f1611b8133d9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnrn9" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 
08:43:32.109169 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ebf4fd3-9460-4809-aa41-7db3a1bc0032-srv-cert\") pod \"olm-operator-6b444d44fb-4zbst\" (UID: \"8ebf4fd3-9460-4809-aa41-7db3a1bc0032\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109197 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d42bddeb-f93a-4603-a38e-1016ca2b3a03-config-volume\") pod \"collect-profiles-29494590-2xwwg\" (UID: \"d42bddeb-f93a-4603-a38e-1016ca2b3a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109231 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f526578-7ac2-4afb-9f56-4ac8c15627f7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-x6gfn\" (UID: \"9f526578-7ac2-4afb-9f56-4ac8c15627f7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109284 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8538d829-1294-4207-b88d-ae6294983b78-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-l9qrr\" (UID: \"8538d829-1294-4207-b88d-ae6294983b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109315 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d96d68-0600-4d2c-8f16-3abf329f8483-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lmszx\" (UID: 
\"98d96d68-0600-4d2c-8f16-3abf329f8483\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109357 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/40ec3f19-6989-4c7c-92e2-1d1501a75b24-mountpoint-dir\") pod \"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109408 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz9vz\" (UniqueName: \"kubernetes.io/projected/aa8604f8-8033-4126-bbaf-e2648ed77680-kube-api-access-kz9vz\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhx9c\" (UID: \"aa8604f8-8033-4126-bbaf-e2648ed77680\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109437 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9a967adb-d943-4101-8eed-964b97623a5c-images\") pod \"machine-config-operator-74547568cd-v4mz4\" (UID: \"9a967adb-d943-4101-8eed-964b97623a5c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109463 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-service-ca\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109494 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/aa8604f8-8033-4126-bbaf-e2648ed77680-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhx9c\" (UID: \"aa8604f8-8033-4126-bbaf-e2648ed77680\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109537 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-config\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109563 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa8604f8-8033-4126-bbaf-e2648ed77680-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhx9c\" (UID: \"aa8604f8-8033-4126-bbaf-e2648ed77680\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109596 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l94rb\" (UniqueName: \"kubernetes.io/projected/6560d1c1-3b50-45b8-9ed9-d43d0784efba-kube-api-access-l94rb\") pod \"package-server-manager-789f6589d5-c82ws\" (UID: \"6560d1c1-3b50-45b8-9ed9-d43d0784efba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109624 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6v79\" (UniqueName: \"kubernetes.io/projected/9f526578-7ac2-4afb-9f56-4ac8c15627f7-kube-api-access-v6v79\") pod \"openshift-controller-manager-operator-756b6f6bc6-x6gfn\" (UID: \"9f526578-7ac2-4afb-9f56-4ac8c15627f7\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109664 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f526578-7ac2-4afb-9f56-4ac8c15627f7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-x6gfn\" (UID: \"9f526578-7ac2-4afb-9f56-4ac8c15627f7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109689 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp5tr\" (UniqueName: \"kubernetes.io/projected/381231a0-87e3-4e8d-818a-d126f2a73d5f-kube-api-access-kp5tr\") pod \"machine-config-server-9dn6s\" (UID: \"381231a0-87e3-4e8d-818a-d126f2a73d5f\") " pod="openshift-machine-config-operator/machine-config-server-9dn6s" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109724 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733-tmpfs\") pod \"packageserver-d55dfcdfc-cchbc\" (UID: \"68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109749 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw74g\" (UniqueName: \"kubernetes.io/projected/68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733-kube-api-access-kw74g\") pod \"packageserver-d55dfcdfc-cchbc\" (UID: \"68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109775 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn2xm\" (UniqueName: 
\"kubernetes.io/projected/40ec3f19-6989-4c7c-92e2-1d1501a75b24-kube-api-access-gn2xm\") pod \"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109823 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733-apiservice-cert\") pod \"packageserver-d55dfcdfc-cchbc\" (UID: \"68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109871 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d6da3558-9eb3-457a-b1f6-daaaf39ed8ae-profile-collector-cert\") pod \"catalog-operator-68c6474976-4fjjn\" (UID: \"d6da3558-9eb3-457a-b1f6-daaaf39ed8ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109945 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c48be33-f0d6-4b78-9779-fec2a6244f89-config-volume\") pod \"dns-default-852sw\" (UID: \"5c48be33-f0d6-4b78-9779-fec2a6244f89\") " pod="openshift-dns/dns-default-852sw" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.109978 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmpkq\" (UniqueName: \"kubernetes.io/projected/9a967adb-d943-4101-8eed-964b97623a5c-kube-api-access-kmpkq\") pod \"machine-config-operator-74547568cd-v4mz4\" (UID: \"9a967adb-d943-4101-8eed-964b97623a5c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110012 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hq496\" (UniqueName: \"kubernetes.io/projected/d42bddeb-f93a-4603-a38e-1016ca2b3a03-kube-api-access-hq496\") pod \"collect-profiles-29494590-2xwwg\" (UID: \"d42bddeb-f93a-4603-a38e-1016ca2b3a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110040 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/69349a4a-2b55-4b42-9baf-bd951db8643a-signing-key\") pod \"service-ca-9c57cc56f-xzj4q\" (UID: \"69349a4a-2b55-4b42-9baf-bd951db8643a\") " pod="openshift-service-ca/service-ca-9c57cc56f-xzj4q" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110068 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-trusted-ca-bundle\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110096 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/381231a0-87e3-4e8d-818a-d126f2a73d5f-node-bootstrap-token\") pod \"machine-config-server-9dn6s\" (UID: \"381231a0-87e3-4e8d-818a-d126f2a73d5f\") " pod="openshift-machine-config-operator/machine-config-server-9dn6s" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110134 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8538d829-1294-4207-b88d-ae6294983b78-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-l9qrr\" (UID: \"8538d829-1294-4207-b88d-ae6294983b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr" Jan 29 08:43:32 crc 
kubenswrapper[4895]: I0129 08:43:32.110211 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r47v2\" (UniqueName: \"kubernetes.io/projected/8ebf4fd3-9460-4809-aa41-7db3a1bc0032-kube-api-access-r47v2\") pod \"olm-operator-6b444d44fb-4zbst\" (UID: \"8ebf4fd3-9460-4809-aa41-7db3a1bc0032\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110242 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9a967adb-d943-4101-8eed-964b97623a5c-proxy-tls\") pod \"machine-config-operator-74547568cd-v4mz4\" (UID: \"9a967adb-d943-4101-8eed-964b97623a5c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110285 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdxql\" (UniqueName: \"kubernetes.io/projected/13c23359-7d69-4f3c-b89a-a25bee602474-kube-api-access-wdxql\") pod \"control-plane-machine-set-operator-78cbb6b69f-f5cn6\" (UID: \"13c23359-7d69-4f3c-b89a-a25bee602474\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f5cn6" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110314 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8538d829-1294-4207-b88d-ae6294983b78-config\") pod \"kube-apiserver-operator-766d6c64bb-l9qrr\" (UID: \"8538d829-1294-4207-b88d-ae6294983b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110352 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gst68\" (UniqueName: \"kubernetes.io/projected/9bce819e-baa7-403a-8091-66dbc41187af-kube-api-access-gst68\") pod 
\"service-ca-operator-777779d784-bhjbb\" (UID: \"9bce819e-baa7-403a-8091-66dbc41187af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110378 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d6da3558-9eb3-457a-b1f6-daaaf39ed8ae-srv-cert\") pod \"catalog-operator-68c6474976-4fjjn\" (UID: \"d6da3558-9eb3-457a-b1f6-daaaf39ed8ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110409 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjvc7\" (UniqueName: \"kubernetes.io/projected/69349a4a-2b55-4b42-9baf-bd951db8643a-kube-api-access-tjvc7\") pod \"service-ca-9c57cc56f-xzj4q\" (UID: \"69349a4a-2b55-4b42-9baf-bd951db8643a\") " pod="openshift-service-ca/service-ca-9c57cc56f-xzj4q" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110446 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110479 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/13c23359-7d69-4f3c-b89a-a25bee602474-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f5cn6\" (UID: \"13c23359-7d69-4f3c-b89a-a25bee602474\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f5cn6" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110517 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-oauth-config\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110544 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bce819e-baa7-403a-8091-66dbc41187af-serving-cert\") pod \"service-ca-operator-777779d784-bhjbb\" (UID: \"9bce819e-baa7-403a-8091-66dbc41187af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110570 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ebf4fd3-9460-4809-aa41-7db3a1bc0032-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4zbst\" (UID: \"8ebf4fd3-9460-4809-aa41-7db3a1bc0032\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110599 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d42bddeb-f93a-4603-a38e-1016ca2b3a03-secret-volume\") pod \"collect-profiles-29494590-2xwwg\" (UID: \"d42bddeb-f93a-4603-a38e-1016ca2b3a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110625 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c437b17-e286-4520-bde1-4e13376343e3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jpb5z\" (UID: \"6c437b17-e286-4520-bde1-4e13376343e3\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110683 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98d96d68-0600-4d2c-8f16-3abf329f8483-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lmszx\" (UID: \"98d96d68-0600-4d2c-8f16-3abf329f8483\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110714 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48m8g\" (UniqueName: \"kubernetes.io/projected/6c437b17-e286-4520-bde1-4e13376343e3-kube-api-access-48m8g\") pod \"machine-config-controller-84d6567774-jpb5z\" (UID: \"6c437b17-e286-4520-bde1-4e13376343e3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110759 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6560d1c1-3b50-45b8-9ed9-d43d0784efba-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-c82ws\" (UID: \"6560d1c1-3b50-45b8-9ed9-d43d0784efba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.110789 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bmk4\" (UniqueName: \"kubernetes.io/projected/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-kube-api-access-6bmk4\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.113360 4895 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f6088a3-2691-4029-a576-2a5abcd3b107-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rmbm8\" (UID: \"8f6088a3-2691-4029-a576-2a5abcd3b107\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.114434 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-service-ca\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.115131 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9a967adb-d943-4101-8eed-964b97623a5c-images\") pod \"machine-config-operator-74547568cd-v4mz4\" (UID: \"9a967adb-d943-4101-8eed-964b97623a5c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.115514 4895 generic.go:334] "Generic (PLEG): container finished" podID="0b82afa3-8f94-41e2-828e-4debc9e73088" containerID="81465cb9aed0120faa53fe0b9a5b803ea770547632dbed965b6d33a0af0a6285" exitCode=0 Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.115606 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-klns8" event={"ID":"0b82afa3-8f94-41e2-828e-4debc9e73088","Type":"ContainerDied","Data":"81465cb9aed0120faa53fe0b9a5b803ea770547632dbed965b6d33a0af0a6285"} Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.117105 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bce819e-baa7-403a-8091-66dbc41187af-config\") pod \"service-ca-operator-777779d784-bhjbb\" (UID: \"9bce819e-baa7-403a-8091-66dbc41187af\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.117452 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-trusted-ca-bundle\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.118076 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3aab1def-9cb4-4b52-b6d5-83b033e82b62-cert\") pod \"ingress-canary-k2v76\" (UID: \"3aab1def-9cb4-4b52-b6d5-83b033e82b62\") " pod="openshift-ingress-canary/ingress-canary-k2v76" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.119432 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f6088a3-2691-4029-a576-2a5abcd3b107-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rmbm8\" (UID: \"8f6088a3-2691-4029-a576-2a5abcd3b107\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.124903 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa8604f8-8033-4126-bbaf-e2648ed77680-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhx9c\" (UID: \"aa8604f8-8033-4126-bbaf-e2648ed77680\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.126701 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8538d829-1294-4207-b88d-ae6294983b78-config\") pod \"kube-apiserver-operator-766d6c64bb-l9qrr\" (UID: 
\"8538d829-1294-4207-b88d-ae6294983b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.127828 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733-tmpfs\") pod \"packageserver-d55dfcdfc-cchbc\" (UID: \"68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.127875 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/40ec3f19-6989-4c7c-92e2-1d1501a75b24-mountpoint-dir\") pod \"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.131385 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-oauth-serving-cert\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:32 crc kubenswrapper[4895]: E0129 08:43:32.138477 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:32.638449677 +0000 UTC m=+154.279957823 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.139404 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c48be33-f0d6-4b78-9779-fec2a6244f89-config-volume\") pod \"dns-default-852sw\" (UID: \"5c48be33-f0d6-4b78-9779-fec2a6244f89\") " pod="openshift-dns/dns-default-852sw" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.141133 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c437b17-e286-4520-bde1-4e13376343e3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jpb5z\" (UID: \"6c437b17-e286-4520-bde1-4e13376343e3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.141711 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/13c23359-7d69-4f3c-b89a-a25bee602474-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f5cn6\" (UID: \"13c23359-7d69-4f3c-b89a-a25bee602474\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f5cn6" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.142499 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-oauth-config\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.142619 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/69349a4a-2b55-4b42-9baf-bd951db8643a-signing-key\") pod \"service-ca-9c57cc56f-xzj4q\" (UID: \"69349a4a-2b55-4b42-9baf-bd951db8643a\") " pod="openshift-service-ca/service-ca-9c57cc56f-xzj4q" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.143543 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/69349a4a-2b55-4b42-9baf-bd951db8643a-signing-cabundle\") pod \"service-ca-9c57cc56f-xzj4q\" (UID: \"69349a4a-2b55-4b42-9baf-bd951db8643a\") " pod="openshift-service-ca/service-ca-9c57cc56f-xzj4q" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.144101 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f526578-7ac2-4afb-9f56-4ac8c15627f7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-x6gfn\" (UID: \"9f526578-7ac2-4afb-9f56-4ac8c15627f7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.144113 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d42bddeb-f93a-4603-a38e-1016ca2b3a03-config-volume\") pod \"collect-profiles-29494590-2xwwg\" (UID: \"d42bddeb-f93a-4603-a38e-1016ca2b3a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.144386 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" 
(UniqueName: \"kubernetes.io/host-path/40ec3f19-6989-4c7c-92e2-1d1501a75b24-plugins-dir\") pod \"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.146047 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733-apiservice-cert\") pod \"packageserver-d55dfcdfc-cchbc\" (UID: \"68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.149064 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-config\") pod \"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.150035 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/381231a0-87e3-4e8d-818a-d126f2a73d5f-node-bootstrap-token\") pod \"machine-config-server-9dn6s\" (UID: \"381231a0-87e3-4e8d-818a-d126f2a73d5f\") " pod="openshift-machine-config-operator/machine-config-server-9dn6s" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.151267 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6560d1c1-3b50-45b8-9ed9-d43d0784efba-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-c82ws\" (UID: \"6560d1c1-3b50-45b8-9ed9-d43d0784efba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.164728 4895 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" event={"ID":"26ddacfd-315a-46a3-a9a1-7149df69ef84","Type":"ContainerStarted","Data":"a71989a159a51d3c473da1f43e4e7c8a6d7402c2d6c7380fc15aca43cb2441a4"} Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.165089 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.176008 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" event={"ID":"5203d54b-a735-4118-bae0-7554299a98cf","Type":"ContainerStarted","Data":"6686711f349e59d53ae7f3dfba0e0661671c3813b60829e9fde32c863897b300"} Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.176060 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" event={"ID":"5203d54b-a735-4118-bae0-7554299a98cf","Type":"ContainerStarted","Data":"cb33d2bae61c10efef33319c910cf76f5e14d8ea73dbfc333f145247c492801a"} Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.178594 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d6da3558-9eb3-457a-b1f6-daaaf39ed8ae-profile-collector-cert\") pod \"catalog-operator-68c6474976-4fjjn\" (UID: \"d6da3558-9eb3-457a-b1f6-daaaf39ed8ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.185638 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa8604f8-8033-4126-bbaf-e2648ed77680-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhx9c\" (UID: \"aa8604f8-8033-4126-bbaf-e2648ed77680\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.186458 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8538d829-1294-4207-b88d-ae6294983b78-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-l9qrr\" (UID: \"8538d829-1294-4207-b88d-ae6294983b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.186688 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bce819e-baa7-403a-8091-66dbc41187af-serving-cert\") pod \"service-ca-operator-777779d784-bhjbb\" (UID: \"9bce819e-baa7-403a-8091-66dbc41187af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.186763 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9a967adb-d943-4101-8eed-964b97623a5c-proxy-tls\") pod \"machine-config-operator-74547568cd-v4mz4\" (UID: \"9a967adb-d943-4101-8eed-964b97623a5c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.187116 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d96d68-0600-4d2c-8f16-3abf329f8483-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lmszx\" (UID: \"98d96d68-0600-4d2c-8f16-3abf329f8483\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.188632 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9f526578-7ac2-4afb-9f56-4ac8c15627f7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-x6gfn\" (UID: \"9f526578-7ac2-4afb-9f56-4ac8c15627f7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.194708 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8538d829-1294-4207-b88d-ae6294983b78-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-l9qrr\" (UID: \"8538d829-1294-4207-b88d-ae6294983b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.197988 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6v79\" (UniqueName: \"kubernetes.io/projected/9f526578-7ac2-4afb-9f56-4ac8c15627f7-kube-api-access-v6v79\") pod \"openshift-controller-manager-operator-756b6f6bc6-x6gfn\" (UID: \"9f526578-7ac2-4afb-9f56-4ac8c15627f7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.205755 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733-webhook-cert\") pod \"packageserver-d55dfcdfc-cchbc\" (UID: \"68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.207788 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" event={"ID":"9395a191-b3a5-4b32-b463-1af135a25807","Type":"ContainerStarted","Data":"46a7078e4ecbc9f9b1533d65e9ae920d59e327628e2927eb3425290c394dda95"} Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.215660 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:32 crc kubenswrapper[4895]: E0129 08:43:32.216397 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:32.716375106 +0000 UTC m=+154.357883252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.216703 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.218226 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:32 crc kubenswrapper[4895]: E0129 08:43:32.219154 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:32.719140681 +0000 UTC m=+154.360648827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.227566 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.228038 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-f64b6" event={"ID":"72fba804-18f2-4fae-addd-49c6b152c262","Type":"ContainerStarted","Data":"60592f893ddd87a42be4c3c4826d5ee6fc369f87129acb5ffd16864b1d6fb37c"} Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.229874 4895 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bc7pv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.229973 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" podUID="26ddacfd-315a-46a3-a9a1-7149df69ef84" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.233185 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw74g\" (UniqueName: \"kubernetes.io/projected/68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733-kube-api-access-kw74g\") pod \"packageserver-d55dfcdfc-cchbc\" (UID: \"68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.241674 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d6da3558-9eb3-457a-b1f6-daaaf39ed8ae-srv-cert\") pod \"catalog-operator-68c6474976-4fjjn\" (UID: \"d6da3558-9eb3-457a-b1f6-daaaf39ed8ae\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.242218 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ebf4fd3-9460-4809-aa41-7db3a1bc0032-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4zbst\" (UID: \"8ebf4fd3-9460-4809-aa41-7db3a1bc0032\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.242412 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d42bddeb-f93a-4603-a38e-1016ca2b3a03-secret-volume\") pod \"collect-profiles-29494590-2xwwg\" (UID: \"d42bddeb-f93a-4603-a38e-1016ca2b3a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.242902 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c437b17-e286-4520-bde1-4e13376343e3-proxy-tls\") pod \"machine-config-controller-84d6567774-jpb5z\" (UID: \"6c437b17-e286-4520-bde1-4e13376343e3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.246214 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ebf4fd3-9460-4809-aa41-7db3a1bc0032-srv-cert\") pod \"olm-operator-6b444d44fb-4zbst\" (UID: \"8ebf4fd3-9460-4809-aa41-7db3a1bc0032\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.246703 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bmk4\" (UniqueName: \"kubernetes.io/projected/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-kube-api-access-6bmk4\") pod 
\"console-f9d7485db-z5sff\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.249062 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp5tr\" (UniqueName: \"kubernetes.io/projected/381231a0-87e3-4e8d-818a-d126f2a73d5f-kube-api-access-kp5tr\") pod \"machine-config-server-9dn6s\" (UID: \"381231a0-87e3-4e8d-818a-d126f2a73d5f\") " pod="openshift-machine-config-operator/machine-config-server-9dn6s" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.267563 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.270776 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph477\" (UniqueName: \"kubernetes.io/projected/d6da3558-9eb3-457a-b1f6-daaaf39ed8ae-kube-api-access-ph477\") pod \"catalog-operator-68c6474976-4fjjn\" (UID: \"d6da3558-9eb3-457a-b1f6-daaaf39ed8ae\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.274455 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l94rb\" (UniqueName: \"kubernetes.io/projected/6560d1c1-3b50-45b8-9ed9-d43d0784efba-kube-api-access-l94rb\") pod \"package-server-manager-789f6589d5-c82ws\" (UID: \"6560d1c1-3b50-45b8-9ed9-d43d0784efba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.291508 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bkbvc" event={"ID":"0bfa8be2-ad8a-4253-9406-feb1dbd01a00","Type":"ContainerStarted","Data":"46855486bb21c0e62424fc8cc2c90cabfdc5a3ead327c4a8acafb5848cd46607"} Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 
08:43:32.293031 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.293565 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-bkbvc" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.293603 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-w8vqq" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.293602 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gst68\" (UniqueName: \"kubernetes.io/projected/9bce819e-baa7-403a-8091-66dbc41187af-kube-api-access-gst68\") pod \"service-ca-operator-777779d784-bhjbb\" (UID: \"9bce819e-baa7-403a-8091-66dbc41187af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.312662 4895 patch_prober.go:28] interesting pod/console-operator-58897d9998-bkbvc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.312750 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bkbvc" podUID="0bfa8be2-ad8a-4253-9406-feb1dbd01a00" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.313877 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.319637 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn2xm\" (UniqueName: \"kubernetes.io/projected/40ec3f19-6989-4c7c-92e2-1d1501a75b24-kube-api-access-gn2xm\") pod \"csi-hostpathplugin-njfl7\" (UID: \"40ec3f19-6989-4c7c-92e2-1d1501a75b24\") " pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.320298 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:32 crc kubenswrapper[4895]: E0129 08:43:32.321579 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:32.821559884 +0000 UTC m=+154.463068030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.330722 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48m8g\" (UniqueName: \"kubernetes.io/projected/6c437b17-e286-4520-bde1-4e13376343e3-kube-api-access-48m8g\") pod \"machine-config-controller-84d6567774-jpb5z\" (UID: \"6c437b17-e286-4520-bde1-4e13376343e3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.332782 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v"] Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.350678 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98d96d68-0600-4d2c-8f16-3abf329f8483-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lmszx\" (UID: \"98d96d68-0600-4d2c-8f16-3abf329f8483\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.352202 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.359152 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.359230 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.360437 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gc8mc"] Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.368937 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-852sw" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.380690 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.385048 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdxql\" (UniqueName: \"kubernetes.io/projected/13c23359-7d69-4f3c-b89a-a25bee602474-kube-api-access-wdxql\") pod \"control-plane-machine-set-operator-78cbb6b69f-f5cn6\" (UID: \"13c23359-7d69-4f3c-b89a-a25bee602474\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f5cn6" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.399681 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq496\" (UniqueName: \"kubernetes.io/projected/d42bddeb-f93a-4603-a38e-1016ca2b3a03-kube-api-access-hq496\") pod \"collect-profiles-29494590-2xwwg\" (UID: \"d42bddeb-f93a-4603-a38e-1016ca2b3a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.400837 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjvc7\" (UniqueName: \"kubernetes.io/projected/69349a4a-2b55-4b42-9baf-bd951db8643a-kube-api-access-tjvc7\") pod \"service-ca-9c57cc56f-xzj4q\" (UID: \"69349a4a-2b55-4b42-9baf-bd951db8643a\") " pod="openshift-service-ca/service-ca-9c57cc56f-xzj4q" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.420223 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-njfl7" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.422092 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r47v2\" (UniqueName: \"kubernetes.io/projected/8ebf4fd3-9460-4809-aa41-7db3a1bc0032-kube-api-access-r47v2\") pod \"olm-operator-6b444d44fb-4zbst\" (UID: \"8ebf4fd3-9460-4809-aa41-7db3a1bc0032\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.422872 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.423000 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x"] Jan 29 08:43:32 crc kubenswrapper[4895]: E0129 08:43:32.429741 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:32.929717633 +0000 UTC m=+154.571225969 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.430072 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz"] Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.432859 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-k2v76" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.438741 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9mccd"] Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.439712 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9dn6s" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.450771 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmpkq\" (UniqueName: \"kubernetes.io/projected/9a967adb-d943-4101-8eed-964b97623a5c-kube-api-access-kmpkq\") pod \"machine-config-operator-74547568cd-v4mz4\" (UID: \"9a967adb-d943-4101-8eed-964b97623a5c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.462602 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz9vz\" (UniqueName: \"kubernetes.io/projected/aa8604f8-8033-4126-bbaf-e2648ed77680-kube-api-access-kz9vz\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhx9c\" (UID: \"aa8604f8-8033-4126-bbaf-e2648ed77680\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.505407 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.513442 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ndgf\" (UniqueName: \"kubernetes.io/projected/8f6088a3-2691-4029-a576-2a5abcd3b107-kube-api-access-4ndgf\") pod \"marketplace-operator-79b997595-rmbm8\" (UID: \"8f6088a3-2691-4029-a576-2a5abcd3b107\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.525836 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:32 crc kubenswrapper[4895]: E0129 08:43:32.526443 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:33.026406562 +0000 UTC m=+154.667914728 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.534055 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2gmw\" (UniqueName: \"kubernetes.io/projected/d6ad1a82-f4da-4591-9622-f1611b8133d9-kube-api-access-c2gmw\") pod \"migrator-59844c95c7-bnrn9\" (UID: \"d6ad1a82-f4da-4591-9622-f1611b8133d9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnrn9" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.535799 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.548501 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f5cn6" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.551362 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnrn9" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.581364 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.591807 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-xzj4q" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.608818 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.609204 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.628483 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:32 crc kubenswrapper[4895]: E0129 08:43:32.628987 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:33.128967959 +0000 UTC m=+154.770476105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.629596 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.639988 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.657573 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.733492 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:32 crc kubenswrapper[4895]: E0129 08:43:32.735097 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:33.235066853 +0000 UTC m=+154.876575009 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.737587 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5"] Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.753875 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw"] Jan 29 08:43:32 crc kubenswrapper[4895]: W0129 08:43:32.762652 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod486af04c_0ffa_435d_8a8e_4867f8c0143e.slice/crio-415b835f3e42bc1ade7344ff1a5516fdf268df8986c27ad309bff65cc0e3d6ba WatchSource:0}: Error finding container 415b835f3e42bc1ade7344ff1a5516fdf268df8986c27ad309bff65cc0e3d6ba: Status 404 returned error can't find the container with id 415b835f3e42bc1ade7344ff1a5516fdf268df8986c27ad309bff65cc0e3d6ba Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.764285 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.764395 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="Get 
\"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.838593 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:32 crc kubenswrapper[4895]: E0129 08:43:32.839059 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:33.339042129 +0000 UTC m=+154.980550275 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:32 crc kubenswrapper[4895]: I0129 08:43:32.940112 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:32 crc kubenswrapper[4895]: E0129 08:43:32.940597 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:33.440574088 +0000 UTC m=+155.082082234 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.041413 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc"] Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.042375 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:33 crc kubenswrapper[4895]: E0129 08:43:33.043245 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:33.543229728 +0000 UTC m=+155.184737874 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.098510 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5fc8p"] Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.145523 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:33 crc kubenswrapper[4895]: E0129 08:43:33.146115 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:33.646089923 +0000 UTC m=+155.287598069 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.247825 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:33 crc kubenswrapper[4895]: E0129 08:43:33.248562 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:33.748521687 +0000 UTC m=+155.390029833 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.362704 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:33 crc kubenswrapper[4895]: E0129 08:43:33.363488 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:33.863459901 +0000 UTC m=+155.504968047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.363820 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x" event={"ID":"148b39e9-bdef-40c7-a6e1-eb1e922710f5","Type":"ContainerStarted","Data":"1b6ff49f99aad28d0fecf5d1e019e7fb39e23d603044fa755a69df7d81be2202"} Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.392098 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-f64b6" event={"ID":"72fba804-18f2-4fae-addd-49c6b152c262","Type":"ContainerStarted","Data":"b2fc74a9c88175fe7bfa4ee3879420c6c6bcd0f368787af183b6ecd2ed030abd"} Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.392826 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-rhk9v" podStartSLOduration=129.392790808 podStartE2EDuration="2m9.392790808s" podCreationTimestamp="2026-01-29 08:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:33.392061879 +0000 UTC m=+155.033570035" watchObservedRunningTime="2026-01-29 08:43:33.392790808 +0000 UTC m=+155.034298954" Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.419244 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" 
event={"ID":"4458bb2e-c5b0-4553-a59c-f9ede889b5f5","Type":"ContainerStarted","Data":"811d5063324e5675fb87ba00bc642fe3b1507f1dc6a3d74627134f480751e3ca"} Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.430096 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9dn6s" event={"ID":"381231a0-87e3-4e8d-818a-d126f2a73d5f","Type":"ContainerStarted","Data":"e8470ec60c0eb23f5d918ea0446298416d641505ece919245f2c8155480a3d33"} Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.436185 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gc8mc" event={"ID":"23e4847e-5b39-4a3b-aada-5d8c28c162e8","Type":"ContainerStarted","Data":"1d95d293340888f8b991e3c9980ad863d8b7306676d64f82667ebbb05cb42c2e"} Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.450683 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" event={"ID":"9395a191-b3a5-4b32-b463-1af135a25807","Type":"ContainerStarted","Data":"d5241e32e21c37ae41787444fecc53c6c91b23966d0c125949060c48dec241c8"} Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.456658 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" event={"ID":"68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733","Type":"ContainerStarted","Data":"6c5651896d261f0f58234598102531d3808d79e48b559e81005baba2069c0856"} Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.461185 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mccd" event={"ID":"486af04c-0ffa-435d-8a8e-4867f8c0143e","Type":"ContainerStarted","Data":"415b835f3e42bc1ade7344ff1a5516fdf268df8986c27ad309bff65cc0e3d6ba"} Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.470580 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:33 crc kubenswrapper[4895]: E0129 08:43:33.472012 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:33.971992801 +0000 UTC m=+155.613500947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.530560 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-klns8" event={"ID":"0b82afa3-8f94-41e2-828e-4debc9e73088","Type":"ContainerStarted","Data":"70b76b23979b98338b8d4ba7c88c07f6eb34688a071238b931affd721c0a74bb"} Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.545797 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" event={"ID":"ef82cf5f-56e2-4e0e-9a7f-674337086996","Type":"ContainerStarted","Data":"dd138b2e6c388400dd990970c6c4b9a79067b80719e96b3d2bf91ec88d3a8d60"} Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.550794 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" 
event={"ID":"8bf5c6f5-7b1f-4bf7-9d47-b181232c1107","Type":"ContainerStarted","Data":"77b63a1bfe5f39673f6b31807c9a9abb0c6ab533401661d1e166241460e4dd67"} Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.556299 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" event={"ID":"a2fd7a59-87f6-4ae1-9d60-646916752cef","Type":"ContainerStarted","Data":"f7e212ccdb2072d01a2f41482ccda69c5307bb09162ad680abb6b3ba9c5424ae"} Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.560759 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5" event={"ID":"2ee8a736-336a-4b9a-a5d3-df1d4da6da62","Type":"ContainerStarted","Data":"700f60e87eeacb51215f8cf8606872825feab93bf42d9a625388875f9d4249e9"} Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.563484 4895 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6696n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.563534 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" podUID="1e7bcbc7-bddb-42f5-915e-d020e66ddeeb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.571522 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 
08:43:33 crc kubenswrapper[4895]: E0129 08:43:33.573651 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:34.073625723 +0000 UTC m=+155.715133879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.574280 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.574386 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.574400 4895 patch_prober.go:28] interesting pod/console-operator-58897d9998-bkbvc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.574487 4895 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bkbvc" podUID="0bfa8be2-ad8a-4253-9406-feb1dbd01a00" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.698231 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:33 crc kubenswrapper[4895]: E0129 08:43:33.701857 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:34.201838228 +0000 UTC m=+155.843346374 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.809166 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:33 crc kubenswrapper[4895]: E0129 08:43:33.809499 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:34.309481183 +0000 UTC m=+155.950989329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.882411 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:33 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:33 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:33 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.882489 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.959190 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:33 crc kubenswrapper[4895]: E0129 08:43:33.959620 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 08:43:34.459603713 +0000 UTC m=+156.101111859 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:33 crc kubenswrapper[4895]: I0129 08:43:33.974282 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" podStartSLOduration=128.974247991 podStartE2EDuration="2m8.974247991s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:33.967546799 +0000 UTC m=+155.609054955" watchObservedRunningTime="2026-01-29 08:43:33.974247991 +0000 UTC m=+155.615756137" Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.067514 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:34 crc kubenswrapper[4895]: E0129 08:43:34.068001 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:34.567983619 +0000 UTC m=+156.209491755 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.072610 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.107228 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-f64b6" podStartSLOduration=129.107189405 podStartE2EDuration="2m9.107189405s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:34.027527629 +0000 UTC m=+155.669035795" watchObservedRunningTime="2026-01-29 08:43:34.107189405 +0000 UTC m=+155.748697551" Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.146416 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-w8vqq" podStartSLOduration=129.14638338 podStartE2EDuration="2m9.14638338s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:34.141949239 +0000 UTC m=+155.783457385" watchObservedRunningTime="2026-01-29 08:43:34.14638338 +0000 UTC m=+155.787891526" Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.170937 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:34 crc kubenswrapper[4895]: E0129 08:43:34.171439 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:34.671422149 +0000 UTC m=+156.312930295 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.196624 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn"] Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.253615 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn"] Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.271719 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:34 crc kubenswrapper[4895]: E0129 08:43:34.272413 
4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:34.772391735 +0000 UTC m=+156.413899881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.305841 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" podStartSLOduration=129.305808642 podStartE2EDuration="2m9.305808642s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:34.217638266 +0000 UTC m=+155.859146412" watchObservedRunningTime="2026-01-29 08:43:34.305808642 +0000 UTC m=+155.947316788" Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.315740 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n8q46" podStartSLOduration=130.315717931 podStartE2EDuration="2m10.315717931s" podCreationTimestamp="2026-01-29 08:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:34.266619577 +0000 UTC m=+155.908127743" watchObservedRunningTime="2026-01-29 08:43:34.315717931 +0000 UTC m=+155.957226077" Jan 29 08:43:34 crc 
kubenswrapper[4895]: I0129 08:43:34.319641 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws"] Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.349734 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-z5sff"] Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.350227 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xc6q5" podStartSLOduration=129.350194278 podStartE2EDuration="2m9.350194278s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:34.326589447 +0000 UTC m=+155.968097593" watchObservedRunningTime="2026-01-29 08:43:34.350194278 +0000 UTC m=+155.991702424" Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.375662 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:34 crc kubenswrapper[4895]: E0129 08:43:34.376116 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:34.876099563 +0000 UTC m=+156.517607709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.398229 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-bkbvc" podStartSLOduration=129.398205103 podStartE2EDuration="2m9.398205103s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:34.394911574 +0000 UTC m=+156.036419720" watchObservedRunningTime="2026-01-29 08:43:34.398205103 +0000 UTC m=+156.039713249" Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.411882 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr"] Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.425833 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-852sw"] Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.477155 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:34 crc kubenswrapper[4895]: E0129 08:43:34.477645 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:34.977623232 +0000 UTC m=+156.619131368 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.484135 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" podStartSLOduration=129.484112048 podStartE2EDuration="2m9.484112048s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:34.478516676 +0000 UTC m=+156.120024822" watchObservedRunningTime="2026-01-29 08:43:34.484112048 +0000 UTC m=+156.125620194" Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.568663 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zd4f9" podStartSLOduration=129.568606155 podStartE2EDuration="2m9.568606155s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:34.53310026 +0000 UTC m=+156.174608406" watchObservedRunningTime="2026-01-29 08:43:34.568606155 +0000 UTC m=+156.210114301" Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.570762 4895 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zdksh" podStartSLOduration=130.570754544 podStartE2EDuration="2m10.570754544s" podCreationTimestamp="2026-01-29 08:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:34.567880226 +0000 UTC m=+156.209388392" watchObservedRunningTime="2026-01-29 08:43:34.570754544 +0000 UTC m=+156.212262690" Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.579339 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:34 crc kubenswrapper[4895]: E0129 08:43:34.580041 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:35.079997134 +0000 UTC m=+156.721505280 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.621044 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9dn6s" event={"ID":"381231a0-87e3-4e8d-818a-d126f2a73d5f","Type":"ContainerStarted","Data":"8e9e104f7af0d95ce0e39d624b14657775cc582b9546e873254bb3969c52e5f1"} Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.700702 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:34 crc kubenswrapper[4895]: E0129 08:43:34.701763 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:35.201740753 +0000 UTC m=+156.843248899 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:34 crc kubenswrapper[4895]: W0129 08:43:34.705442 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6da3558_9eb3_457a_b1f6_daaaf39ed8ae.slice/crio-3019f55df1fbde8f80df2e5a4accb449160b64fea9e213bb3f1eb097673a9dda WatchSource:0}: Error finding container 3019f55df1fbde8f80df2e5a4accb449160b64fea9e213bb3f1eb097673a9dda: Status 404 returned error can't find the container with id 3019f55df1fbde8f80df2e5a4accb449160b64fea9e213bb3f1eb097673a9dda Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.706741 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9dn6s" podStartSLOduration=5.7067192890000005 podStartE2EDuration="5.706719289s" podCreationTimestamp="2026-01-29 08:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:34.704579 +0000 UTC m=+156.346087156" watchObservedRunningTime="2026-01-29 08:43:34.706719289 +0000 UTC m=+156.348227435" Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.788620 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" event={"ID":"4458bb2e-c5b0-4553-a59c-f9ede889b5f5","Type":"ContainerStarted","Data":"0693e492caaf9c0374b0c09c311d32016c86e42a85ef3af1b2b2540dbc1d7efb"} Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.799314 4895 
patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:34 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:34 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:34 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.799405 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.803776 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:34 crc kubenswrapper[4895]: E0129 08:43:34.805603 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:35.305587185 +0000 UTC m=+156.947095331 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.858365 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" event={"ID":"a2fd7a59-87f6-4ae1-9d60-646916752cef","Type":"ContainerStarted","Data":"6c30a4b64a266109df2c8ffd7587546a8a3127d066b979ad6f41ad3faec9b37e"} Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.870269 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4cwvw" podStartSLOduration=129.870249184 podStartE2EDuration="2m9.870249184s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:34.869598765 +0000 UTC m=+156.511106921" watchObservedRunningTime="2026-01-29 08:43:34.870249184 +0000 UTC m=+156.511757330" Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.918101 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:34 crc kubenswrapper[4895]: E0129 08:43:34.918788 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:35.418760531 +0000 UTC m=+157.060268677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.939456 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" event={"ID":"8bf5c6f5-7b1f-4bf7-9d47-b181232c1107","Type":"ContainerStarted","Data":"6500c7197c4a08b08b4a344e1ed42cb2a096c310ea67693bb3b6d9b0964e3d89"} Jan 29 08:43:34 crc kubenswrapper[4895]: I0129 08:43:34.957536 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" event={"ID":"e6030804-d717-42c9-b2b2-8eaaadaddca0","Type":"ContainerStarted","Data":"00a0b2a8cf5d55eb4278ec6d48fafed44276b8ca0613398f5f3165e716b59d56"} Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.024997 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:35 crc kubenswrapper[4895]: E0129 08:43:35.064134 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:35.564112352 +0000 UTC m=+157.205620498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.068462 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.151344 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.151973 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.152716 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:35 crc kubenswrapper[4895]: E0129 08:43:35.153169 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 08:43:35.653142302 +0000 UTC m=+157.294650618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.219387 4895 csr.go:261] certificate signing request csr-lc5gj is approved, waiting to be issued Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.230009 4895 csr.go:257] certificate signing request csr-lc5gj is issued Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.255962 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:35 crc kubenswrapper[4895]: E0129 08:43:35.257092 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:35.757070566 +0000 UTC m=+157.398578712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.282787 4895 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-8dlcj container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.282840 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" podUID="ef82cf5f-56e2-4e0e-9a7f-674337086996" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.364781 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:35 crc kubenswrapper[4895]: E0129 08:43:35.366548 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:35.86649742 +0000 UTC m=+157.508005566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.368668 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:35 crc kubenswrapper[4895]: E0129 08:43:35.369443 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:35.869271136 +0000 UTC m=+157.510779282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.409002 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-k2v76"] Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.421171 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-bkbvc" Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.448068 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb"] Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.470660 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:35 crc kubenswrapper[4895]: E0129 08:43:35.471244 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:35.971216477 +0000 UTC m=+157.612724613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.486893 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-xzj4q"] Jan 29 08:43:35 crc kubenswrapper[4895]: W0129 08:43:35.598374 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69349a4a_2b55_4b42_9baf_bd951db8643a.slice/crio-413c9e6fcdb31d1f6490b6662860b8890c05e996f677e3b158207b9dc5d4c6f0 WatchSource:0}: Error finding container 413c9e6fcdb31d1f6490b6662860b8890c05e996f677e3b158207b9dc5d4c6f0: Status 404 returned error can't find the container with id 413c9e6fcdb31d1f6490b6662860b8890c05e996f677e3b158207b9dc5d4c6f0 Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.599937 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:35 crc kubenswrapper[4895]: E0129 08:43:35.600425 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:36.100411878 +0000 UTC m=+157.741920024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.618870 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:35 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:35 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:35 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.618943 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.670547 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f5cn6"] Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.673255 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-njfl7"] Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.702597 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:35 crc kubenswrapper[4895]: E0129 08:43:35.702869 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:36.202826681 +0000 UTC m=+157.844334827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.703559 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:35 crc kubenswrapper[4895]: E0129 08:43:35.704081 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:36.204060124 +0000 UTC m=+157.845568270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.806780 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:35 crc kubenswrapper[4895]: E0129 08:43:35.807311 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:36.30728245 +0000 UTC m=+157.948790596 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:35 crc kubenswrapper[4895]: W0129 08:43:35.904413 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40ec3f19_6989_4c7c_92e2_1d1501a75b24.slice/crio-5a2f632f92a5196b5026547b901dce60dcf459cc259ee5ee11e5909e1617f8cf WatchSource:0}: Error finding container 5a2f632f92a5196b5026547b901dce60dcf459cc259ee5ee11e5909e1617f8cf: Status 404 returned error can't find the container with id 5a2f632f92a5196b5026547b901dce60dcf459cc259ee5ee11e5909e1617f8cf Jan 29 08:43:35 crc kubenswrapper[4895]: I0129 08:43:35.909603 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:35 crc kubenswrapper[4895]: E0129 08:43:35.910417 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:36.410398872 +0000 UTC m=+158.051907018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.010783 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:36 crc kubenswrapper[4895]: E0129 08:43:36.011327 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:36.511300685 +0000 UTC m=+158.152808841 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.011363 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z"] Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.033052 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst"] Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.051246 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg"] Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.098279 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c"] Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.117432 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z5sff" event={"ID":"ea9f8a45-3fdc-4780-a008-e0f77c99dffc","Type":"ContainerStarted","Data":"02d6a49f736b9f51cfbdc262c2f156da9cf9aab6f19e27a7056984ec562ba1af"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.118363 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:36 crc kubenswrapper[4895]: E0129 08:43:36.118853 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:36.618836317 +0000 UTC m=+158.260344463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.124271 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bnrn9"] Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.150992 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rmbm8"] Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.154355 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mccd" event={"ID":"486af04c-0ffa-435d-8a8e-4867f8c0143e","Type":"ContainerStarted","Data":"e81b80565d88c0bdb87aba680917f6c5682455350823fe27040dcde7e11e9994"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.168480 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx"] Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.178714 4895 generic.go:334] "Generic (PLEG): container finished" podID="8bf5c6f5-7b1f-4bf7-9d47-b181232c1107" 
containerID="6500c7197c4a08b08b4a344e1ed42cb2a096c310ea67693bb3b6d9b0964e3d89" exitCode=0 Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.179179 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" event={"ID":"8bf5c6f5-7b1f-4bf7-9d47-b181232c1107","Type":"ContainerDied","Data":"6500c7197c4a08b08b4a344e1ed42cb2a096c310ea67693bb3b6d9b0964e3d89"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.189618 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws" event={"ID":"6560d1c1-3b50-45b8-9ed9-d43d0784efba","Type":"ContainerStarted","Data":"e2a6208d2e091f9d273b4707d565fce2a9d4efee64338aff07972ed9fa447236"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.189673 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws" event={"ID":"6560d1c1-3b50-45b8-9ed9-d43d0784efba","Type":"ContainerStarted","Data":"c65ae7737c76c2cad437aa3e9251b438e46641f2c0ba7892c593203acffb4edc"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.220423 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:36 crc kubenswrapper[4895]: E0129 08:43:36.221225 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:36.721207849 +0000 UTC m=+158.362715995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.232604 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 08:38:35 +0000 UTC, rotation deadline is 2026-12-15 01:22:42.254918795 +0000 UTC Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.232692 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7672h39m6.022230483s for next certificate rotation Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.233150 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-xzj4q" event={"ID":"69349a4a-2b55-4b42-9baf-bd951db8643a","Type":"ContainerStarted","Data":"413c9e6fcdb31d1f6490b6662860b8890c05e996f677e3b158207b9dc5d4c6f0"} Jan 29 08:43:36 crc kubenswrapper[4895]: W0129 08:43:36.237129 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa8604f8_8033_4126_bbaf_e2648ed77680.slice/crio-057e5c381aaf6efd8bd38f47e896279e7f635a5a72cb48fd2f3b866db0699e7e WatchSource:0}: Error finding container 057e5c381aaf6efd8bd38f47e896279e7f635a5a72cb48fd2f3b866db0699e7e: Status 404 returned error can't find the container with id 057e5c381aaf6efd8bd38f47e896279e7f635a5a72cb48fd2f3b866db0699e7e Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.248559 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-njfl7" 
event={"ID":"40ec3f19-6989-4c7c-92e2-1d1501a75b24","Type":"ContainerStarted","Data":"5a2f632f92a5196b5026547b901dce60dcf459cc259ee5ee11e5909e1617f8cf"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.267654 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f5cn6" event={"ID":"13c23359-7d69-4f3c-b89a-a25bee602474","Type":"ContainerStarted","Data":"e8f3382934a192dced4e59cb4c1762bca43ea8267ff04f5b7a94953d397578a2"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.269364 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4"] Jan 29 08:43:36 crc kubenswrapper[4895]: W0129 08:43:36.310728 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f6088a3_2691_4029_a576_2a5abcd3b107.slice/crio-2a567cb0889e34de37aecc6201de725a331be68760ceedf1425b6a1ba14924e9 WatchSource:0}: Error finding container 2a567cb0889e34de37aecc6201de725a331be68760ceedf1425b6a1ba14924e9: Status 404 returned error can't find the container with id 2a567cb0889e34de37aecc6201de725a331be68760ceedf1425b6a1ba14924e9 Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.317103 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-k2v76" event={"ID":"3aab1def-9cb4-4b52-b6d5-83b033e82b62","Type":"ContainerStarted","Data":"5a98d1474c2593ce2e6ae6e3ad63387aaa649619fdf81a5f5f83841e35a5ea06"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.323387 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:36 crc kubenswrapper[4895]: E0129 08:43:36.324857 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:36.824840116 +0000 UTC m=+158.466348262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.331811 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x" event={"ID":"148b39e9-bdef-40c7-a6e1-eb1e922710f5","Type":"ContainerStarted","Data":"587b80927e2ae0af1c39a932bb0e597cdb79f4a55db757d8e79c6ff5c5c67132"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.342550 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr" event={"ID":"8538d829-1294-4207-b88d-ae6294983b78","Type":"ContainerStarted","Data":"8f25983ebb562f5c8bbef8e005df9adca51135fb3da0f5eefe598329a45fe075"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.364653 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xr69x" podStartSLOduration=131.364633778 podStartE2EDuration="2m11.364633778s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:36.363953619 +0000 UTC m=+158.005461765" watchObservedRunningTime="2026-01-29 08:43:36.364633778 +0000 UTC m=+158.006141944" Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.394430 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-klns8" event={"ID":"0b82afa3-8f94-41e2-828e-4debc9e73088","Type":"ContainerStarted","Data":"35656727ff25fc6eac8955be2c1e2f57573fda46e34ed9425a3aa0bc871540fb"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.417088 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" event={"ID":"d6da3558-9eb3-457a-b1f6-daaaf39ed8ae","Type":"ContainerStarted","Data":"7f291b38e2b4075d5d1ebe720170de24ef31a1b65cd77217b6d6d92b63e58840"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.417156 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" event={"ID":"d6da3558-9eb3-457a-b1f6-daaaf39ed8ae","Type":"ContainerStarted","Data":"3019f55df1fbde8f80df2e5a4accb449160b64fea9e213bb3f1eb097673a9dda"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.418152 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.423209 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" event={"ID":"a2fd7a59-87f6-4ae1-9d60-646916752cef","Type":"ContainerStarted","Data":"a4bba032a8e9680abd5c2b50d3db1fba705adeb9926a72d9666b5550cc6ab279"} Jan 29 08:43:36 crc kubenswrapper[4895]: E0129 08:43:36.425018 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:36.924991088 +0000 UTC m=+158.566499234 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.426501 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.426903 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:36 crc kubenswrapper[4895]: E0129 08:43:36.427404 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:36.927385303 +0000 UTC m=+158.568893449 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.434338 4895 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4fjjn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.434429 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" podUID="d6da3558-9eb3-457a-b1f6-daaaf39ed8ae" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.458045 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" event={"ID":"e6030804-d717-42c9-b2b2-8eaaadaddca0","Type":"ContainerStarted","Data":"836a52176201c465f32ab5d24d654f439b5d07ec1f3af7069997addb260a0041"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.459375 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.460936 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-klns8" podStartSLOduration=132.460888884 
podStartE2EDuration="2m12.460888884s" podCreationTimestamp="2026-01-29 08:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:36.460356989 +0000 UTC m=+158.101865155" watchObservedRunningTime="2026-01-29 08:43:36.460888884 +0000 UTC m=+158.102397030" Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.474559 4895 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5fc8p container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.474636 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" podUID="e6030804-d717-42c9-b2b2-8eaaadaddca0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.475255 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-852sw" event={"ID":"5c48be33-f0d6-4b78-9779-fec2a6244f89","Type":"ContainerStarted","Data":"5ef8d674d1eb14b7ac38f7f0e1edbed4611ce62dc212ee66deda10d1ddcdd36a"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.483361 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" event={"ID":"68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733","Type":"ContainerStarted","Data":"e453ea8a5543caffaaa17b59bc26378a358e2acb927370ab6a4321cccd59347c"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.484481 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 
08:43:36.496029 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gc8mc" event={"ID":"23e4847e-5b39-4a3b-aada-5d8c28c162e8","Type":"ContainerStarted","Data":"8e95c9f0d6c62bbddb185c997af8df97ad0e877a3a4f6f66cc8f4494aa395dbd"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.496277 4895 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-cchbc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body= Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.496315 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" podUID="68b9b0e4-0dd7-4c85-b8fa-1dbbdae90733" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.497554 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" podStartSLOduration=131.497518529 podStartE2EDuration="2m11.497518529s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:36.496741578 +0000 UTC m=+158.138249724" watchObservedRunningTime="2026-01-29 08:43:36.497518529 +0000 UTC m=+158.139026675" Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.518281 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5" event={"ID":"2ee8a736-336a-4b9a-a5d3-df1d4da6da62","Type":"ContainerStarted","Data":"0095d557b6c0944628757dba1df78d6cebc2631de6732cb30c99d33f0f8cf55f"} Jan 
29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.529443 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb" event={"ID":"9bce819e-baa7-403a-8091-66dbc41187af","Type":"ContainerStarted","Data":"37f4b7a72feb85600bf26848317e0ebacd05ee2f5b0d0c41373615b56c60be7d"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.532613 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:36 crc kubenswrapper[4895]: E0129 08:43:36.534277 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:37.034240657 +0000 UTC m=+158.675748993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.548858 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn" event={"ID":"9f526578-7ac2-4afb-9f56-4ac8c15627f7","Type":"ContainerStarted","Data":"0114aa44fe0b64cdfd592200af208b36af71482f93629bdf0b8c58b9013815ed"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.548895 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn" event={"ID":"9f526578-7ac2-4afb-9f56-4ac8c15627f7","Type":"ContainerStarted","Data":"3521133f4a36a193dc5e0e9a7210c86a6da4303cae2800b71796b2bde132f7d3"} Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.576376 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" podStartSLOduration=132.576353372 podStartE2EDuration="2m12.576353372s" podCreationTimestamp="2026-01-29 08:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:36.533215969 +0000 UTC m=+158.174724115" watchObservedRunningTime="2026-01-29 08:43:36.576353372 +0000 UTC m=+158.217861518" Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.578563 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2rhjz" 
podStartSLOduration=131.578554372 podStartE2EDuration="2m11.578554372s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:36.575506118 +0000 UTC m=+158.217014264" watchObservedRunningTime="2026-01-29 08:43:36.578554372 +0000 UTC m=+158.220062518" Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.619983 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:36 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:36 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:36 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.620061 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.659165 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:36 crc kubenswrapper[4895]: E0129 08:43:36.679898 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 08:43:37.179876425 +0000 UTC m=+158.821384781 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.696999 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x6gfn" podStartSLOduration=131.69697004 podStartE2EDuration="2m11.69697004s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:36.68776562 +0000 UTC m=+158.329273766" watchObservedRunningTime="2026-01-29 08:43:36.69697004 +0000 UTC m=+158.338478186" Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.761028 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:36 crc kubenswrapper[4895]: E0129 08:43:36.761454 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:37.261429341 +0000 UTC m=+158.902937487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.784035 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" podStartSLOduration=131.783989404 podStartE2EDuration="2m11.783989404s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:36.781354393 +0000 UTC m=+158.422862539" watchObservedRunningTime="2026-01-29 08:43:36.783989404 +0000 UTC m=+158.425497550" Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.875553 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:36 crc kubenswrapper[4895]: E0129 08:43:36.876057 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:37.376038046 +0000 UTC m=+159.017546202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:36 crc kubenswrapper[4895]: I0129 08:43:36.978747 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:36 crc kubenswrapper[4895]: E0129 08:43:36.979166 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:37.479134999 +0000 UTC m=+159.120643165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.084497 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:37 crc kubenswrapper[4895]: E0129 08:43:37.084896 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:37.584882332 +0000 UTC m=+159.226390478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.185768 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:37 crc kubenswrapper[4895]: E0129 08:43:37.185994 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:37.685953009 +0000 UTC m=+159.327461155 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.187116 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:37 crc kubenswrapper[4895]: E0129 08:43:37.187573 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:37.687562453 +0000 UTC m=+159.329070789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.291520 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:37 crc kubenswrapper[4895]: E0129 08:43:37.291794 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:37.791747855 +0000 UTC m=+159.433256001 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.291846 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:37 crc kubenswrapper[4895]: E0129 08:43:37.292180 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:37.792166846 +0000 UTC m=+159.433674992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.392660 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:37 crc kubenswrapper[4895]: E0129 08:43:37.393316 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:37.893284005 +0000 UTC m=+159.534792151 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.497629 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:37 crc kubenswrapper[4895]: E0129 08:43:37.498107 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:37.998090342 +0000 UTC m=+159.639598488 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.601707 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:37 crc kubenswrapper[4895]: E0129 08:43:37.602591 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:38.102573482 +0000 UTC m=+159.744081618 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.645227 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:37 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:37 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:37 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.645312 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.703376 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:37 crc kubenswrapper[4895]: E0129 08:43:37.703761 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 08:43:38.203747392 +0000 UTC m=+159.845255538 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.728099 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mccd" event={"ID":"486af04c-0ffa-435d-8a8e-4867f8c0143e","Type":"ContainerStarted","Data":"58ab4388fda54a08d5076e4b3e02c98cdb307c35f0514977d6b6905127303dac"} Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.739065 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" event={"ID":"8f6088a3-2691-4029-a576-2a5abcd3b107","Type":"ContainerStarted","Data":"3d32b1abbfba61aab5e4afb6d7b933ae7af1b19d8344190dde7ed4808d2bdf80"} Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.739133 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.739157 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" event={"ID":"8f6088a3-2691-4029-a576-2a5abcd3b107","Type":"ContainerStarted","Data":"2a567cb0889e34de37aecc6201de725a331be68760ceedf1425b6a1ba14924e9"} Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.747450 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-xzj4q" 
event={"ID":"69349a4a-2b55-4b42-9baf-bd951db8643a","Type":"ContainerStarted","Data":"a671d4b6108a32c34189740d39799e5eecd48f3a4444216777c260aa79bee0ae"} Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.751410 4895 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rmbm8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.751493 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" podUID="8f6088a3-2691-4029-a576-2a5abcd3b107" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.760762 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mccd" podStartSLOduration=132.760743721 podStartE2EDuration="2m12.760743721s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:37.756347182 +0000 UTC m=+159.397855328" watchObservedRunningTime="2026-01-29 08:43:37.760743721 +0000 UTC m=+159.402251867" Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.796000 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" event={"ID":"d42bddeb-f93a-4603-a38e-1016ca2b3a03","Type":"ContainerStarted","Data":"f92fd92a2be0b4c2e85cd016c516f87e92da5ca8cd0a99fa1f241b628fabd679"} Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.805614 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:37 crc kubenswrapper[4895]: E0129 08:43:37.807256 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:38.307235824 +0000 UTC m=+159.948743970 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.822304 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr" event={"ID":"8538d829-1294-4207-b88d-ae6294983b78","Type":"ContainerStarted","Data":"45cbaadd899227ea7471da1409848ce63a97c2ac3a9626b75c3771002ddbe7a7"} Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.825117 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gc8mc" event={"ID":"23e4847e-5b39-4a3b-aada-5d8c28c162e8","Type":"ContainerStarted","Data":"fa6c084318881c0165ff72c5f2aea9e1f96736fc07a60749574b37ddbde1bf1c"} Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.826638 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb" 
event={"ID":"9bce819e-baa7-403a-8091-66dbc41187af","Type":"ContainerStarted","Data":"50924bc9c8106f2b75e732fb040092c7dda1a92aca6f9f037f443ce3993810f2"} Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.847602 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" event={"ID":"8ebf4fd3-9460-4809-aa41-7db3a1bc0032","Type":"ContainerStarted","Data":"187b4d33f96a600c627c22434dce8c328dbd8e561cd130e8ea7dde6c823078bf"} Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.850568 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.876110 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-852sw" event={"ID":"5c48be33-f0d6-4b78-9779-fec2a6244f89","Type":"ContainerStarted","Data":"390919cb67f84615bdbc4a9087bf6fbc7266f3d164a11a3aac5a858ed86292b6"} Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.877597 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnrn9" event={"ID":"d6ad1a82-f4da-4591-9622-f1611b8133d9","Type":"ContainerStarted","Data":"3f151dd9d239c188ce2c8e467d954610831a2554080153321a4a0a43f28970cf"} Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.884547 4895 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4zbst container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.884607 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" podUID="8ebf4fd3-9460-4809-aa41-7db3a1bc0032" containerName="olm-operator" probeResult="failure" 
output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.892795 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" podStartSLOduration=132.892772679 podStartE2EDuration="2m12.892772679s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:37.836982903 +0000 UTC m=+159.478491049" watchObservedRunningTime="2026-01-29 08:43:37.892772679 +0000 UTC m=+159.534280825" Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.908624 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:37 crc kubenswrapper[4895]: E0129 08:43:37.909074 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:38.409055302 +0000 UTC m=+160.050563448 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.909599 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c" event={"ID":"aa8604f8-8033-4126-bbaf-e2648ed77680","Type":"ContainerStarted","Data":"057e5c381aaf6efd8bd38f47e896279e7f635a5a72cb48fd2f3b866db0699e7e"} Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.933880 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" event={"ID":"8bf5c6f5-7b1f-4bf7-9d47-b181232c1107","Type":"ContainerStarted","Data":"8a1f186de59474f0ceb9486c0c1ecb5fdf2eb8d80a34d564f516f880115bd45f"} Jan 29 08:43:37 crc kubenswrapper[4895]: I0129 08:43:37.934767 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.020529 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-gc8mc" podStartSLOduration=133.020505321 podStartE2EDuration="2m13.020505321s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:38.020303205 +0000 UTC m=+159.661811361" watchObservedRunningTime="2026-01-29 08:43:38.020505321 +0000 UTC m=+159.662013467" Jan 29 08:43:38 crc 
kubenswrapper[4895]: I0129 08:43:38.021040 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.021464 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-xzj4q" podStartSLOduration=133.021458207 podStartE2EDuration="2m13.021458207s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:37.893303364 +0000 UTC m=+159.534811510" watchObservedRunningTime="2026-01-29 08:43:38.021458207 +0000 UTC m=+159.662966363" Jan 29 08:43:38 crc kubenswrapper[4895]: E0129 08:43:38.023249 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:38.523229324 +0000 UTC m=+160.164737470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.090273 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5" event={"ID":"2ee8a736-336a-4b9a-a5d3-df1d4da6da62","Type":"ContainerStarted","Data":"c9a18696b469042a10abd7d8aaf879057dd3467d62f3faccba4df0f7167265be"} Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.097598 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z" event={"ID":"6c437b17-e286-4520-bde1-4e13376343e3","Type":"ContainerStarted","Data":"cad5548bbd267bdff861b464828437af0c05793ad1a461942bd33089bba8a5d2"} Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.099665 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z5sff" event={"ID":"ea9f8a45-3fdc-4780-a008-e0f77c99dffc","Type":"ContainerStarted","Data":"e4a35276bc76fa5acc2bb2060b997831c90dfc02c2635c8d04c893775de420f2"} Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.124620 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:38 crc kubenswrapper[4895]: E0129 08:43:38.125093 4895 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:38.625077783 +0000 UTC m=+160.266585929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.171356 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f5cn6" event={"ID":"13c23359-7d69-4f3c-b89a-a25bee602474","Type":"ContainerStarted","Data":"66fe2af9b50f4ad08be3c2223f30bedea5e3cd65210d552995b1773608683811"} Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.183484 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws" event={"ID":"6560d1c1-3b50-45b8-9ed9-d43d0784efba","Type":"ContainerStarted","Data":"1c6a0edd5a4999e09fd0790d1191a515c46d992415fbe9fb993cc243e10f588d"} Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.184373 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws" Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.186306 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" 
event={"ID":"9a967adb-d943-4101-8eed-964b97623a5c","Type":"ContainerStarted","Data":"d37985a70b33f313c0598035f8754c1ebcc681ca2c05860f3ce86453869210a6"} Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.188549 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-k2v76" event={"ID":"3aab1def-9cb4-4b52-b6d5-83b033e82b62","Type":"ContainerStarted","Data":"d966cd605a02cc3deb669e68464a1819970ddf69033a04d1474e4ce5d66f067c"} Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.191616 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx" event={"ID":"98d96d68-0600-4d2c-8f16-3abf329f8483","Type":"ContainerStarted","Data":"4c1ec15c635a7c53428dbe204828455e6034e432b90436d844dbd384810342a3"} Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.194942 4895 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5fc8p container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.194988 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" podUID="e6030804-d717-42c9-b2b2-8eaaadaddca0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.195050 4895 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4fjjn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 
08:43:38.195064 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" podUID="d6da3558-9eb3-457a-b1f6-daaaf39ed8ae" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.227857 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.228669 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l9qrr" podStartSLOduration=133.228642797 podStartE2EDuration="2m13.228642797s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:38.227392263 +0000 UTC m=+159.868900419" watchObservedRunningTime="2026-01-29 08:43:38.228642797 +0000 UTC m=+159.870150943" Jan 29 08:43:38 crc kubenswrapper[4895]: E0129 08:43:38.230004 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:38.729962603 +0000 UTC m=+160.371470749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.286512 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" podStartSLOduration=133.286464199 podStartE2EDuration="2m13.286464199s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:38.286363106 +0000 UTC m=+159.927871252" watchObservedRunningTime="2026-01-29 08:43:38.286464199 +0000 UTC m=+159.927972345" Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.330494 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:38 crc kubenswrapper[4895]: E0129 08:43:38.334771 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:38.834750491 +0000 UTC m=+160.476258637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.395539 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bhjbb" podStartSLOduration=133.395500532 podStartE2EDuration="2m13.395500532s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:38.392792808 +0000 UTC m=+160.034300974" watchObservedRunningTime="2026-01-29 08:43:38.395500532 +0000 UTC m=+160.037008678" Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.433278 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:38 crc kubenswrapper[4895]: E0129 08:43:38.433616 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:38.933582708 +0000 UTC m=+160.575090854 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.433729 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:38 crc kubenswrapper[4895]: E0129 08:43:38.434360 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:38.934348498 +0000 UTC m=+160.575856644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.469036 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5" podStartSLOduration=134.46900526 podStartE2EDuration="2m14.46900526s" podCreationTimestamp="2026-01-29 08:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:38.467638853 +0000 UTC m=+160.109146999" watchObservedRunningTime="2026-01-29 08:43:38.46900526 +0000 UTC m=+160.110513406" Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.469168 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c" podStartSLOduration=133.469160904 podStartE2EDuration="2m13.469160904s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:38.433389692 +0000 UTC m=+160.074897838" watchObservedRunningTime="2026-01-29 08:43:38.469160904 +0000 UTC m=+160.110669060" Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.535110 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:38 crc kubenswrapper[4895]: E0129 08:43:38.535361 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:39.035323112 +0000 UTC m=+160.676831248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.535460 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:38 crc kubenswrapper[4895]: E0129 08:43:38.535873 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:39.035855787 +0000 UTC m=+160.677363933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.601705 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:38 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:38 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:38 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.601783 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.633647 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws" podStartSLOduration=133.633617504 podStartE2EDuration="2m13.633617504s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:38.588248711 +0000 UTC m=+160.229756857" watchObservedRunningTime="2026-01-29 08:43:38.633617504 +0000 UTC m=+160.275125650" Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.639281 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:38 crc kubenswrapper[4895]: E0129 08:43:38.639878 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:39.139854914 +0000 UTC m=+160.781363060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.744940 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:38 crc kubenswrapper[4895]: E0129 08:43:38.745517 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:39.245493194 +0000 UTC m=+160.887001350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.762837 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f5cn6" podStartSLOduration=133.762815676 podStartE2EDuration="2m13.762815676s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:38.69415304 +0000 UTC m=+160.335661186" watchObservedRunningTime="2026-01-29 08:43:38.762815676 +0000 UTC m=+160.404323812" Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.763648 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-k2v76" podStartSLOduration=9.763640057 podStartE2EDuration="9.763640057s" podCreationTimestamp="2026-01-29 08:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:38.757795589 +0000 UTC m=+160.399303745" watchObservedRunningTime="2026-01-29 08:43:38.763640057 +0000 UTC m=+160.405148203" Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.849241 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:38 crc kubenswrapper[4895]: E0129 08:43:38.849718 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:39.349678736 +0000 UTC m=+160.991186882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.924004 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" podStartSLOduration=134.923979336 podStartE2EDuration="2m14.923979336s" podCreationTimestamp="2026-01-29 08:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:38.8170927 +0000 UTC m=+160.458600866" watchObservedRunningTime="2026-01-29 08:43:38.923979336 +0000 UTC m=+160.565487482" Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.924470 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-z5sff" podStartSLOduration=133.924464448 podStartE2EDuration="2m13.924464448s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:38.924233752 +0000 UTC m=+160.565741898" 
watchObservedRunningTime="2026-01-29 08:43:38.924464448 +0000 UTC m=+160.565972594" Jan 29 08:43:38 crc kubenswrapper[4895]: I0129 08:43:38.951950 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:38 crc kubenswrapper[4895]: E0129 08:43:38.952436 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:39.452416478 +0000 UTC m=+161.093924624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.023901 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cchbc" Jan 29 08:43:39 crc kubenswrapper[4895]: E0129 08:43:39.053512 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:39.553478395 +0000 UTC m=+161.194986541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.053362 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.054056 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:39 crc kubenswrapper[4895]: E0129 08:43:39.054469 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:39.554459092 +0000 UTC m=+161.195967238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.155733 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:39 crc kubenswrapper[4895]: E0129 08:43:39.156140 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:39.656118565 +0000 UTC m=+161.297626711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.231511 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-852sw" event={"ID":"5c48be33-f0d6-4b78-9779-fec2a6244f89","Type":"ContainerStarted","Data":"b729e6c14cb5397165e663eadb1a351351771e3102101f06181f1680d3b60e5c"} Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.232427 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-852sw" Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.238590 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnrn9" event={"ID":"d6ad1a82-f4da-4591-9622-f1611b8133d9","Type":"ContainerStarted","Data":"4fa264983a4c0842dedad9bfc929e48d7ab9fe514aa9d85ab41b98ed6211f2f3"} Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.238756 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnrn9" event={"ID":"d6ad1a82-f4da-4591-9622-f1611b8133d9","Type":"ContainerStarted","Data":"8dd2cc3289096911537bcec8cb324a1394d22665ef33dda6c35d9b3c7371f6ff"} Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.247116 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhx9c" event={"ID":"aa8604f8-8033-4126-bbaf-e2648ed77680","Type":"ContainerStarted","Data":"070dde6f3f7eb01429165f050532110dca2ab5146d1b05ce8a5753013315be6f"} Jan 29 
08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.257051 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:39 crc kubenswrapper[4895]: E0129 08:43:39.257602 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:39.757583332 +0000 UTC m=+161.399091478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.274311 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx" event={"ID":"98d96d68-0600-4d2c-8f16-3abf329f8483","Type":"ContainerStarted","Data":"4924f6521746f35fdac567059e64be7347ca84141a7a74215b9347a1ea4b0567"} Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.290312 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-njfl7" event={"ID":"40ec3f19-6989-4c7c-92e2-1d1501a75b24","Type":"ContainerStarted","Data":"e0d456a9d3096a588d3d7d40ef1c789507de2197b41660bf470fdd5c3e13afe2"} Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 
08:43:39.316635 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" event={"ID":"d42bddeb-f93a-4603-a38e-1016ca2b3a03","Type":"ContainerStarted","Data":"5a64b142a76b6c87a3c1406fae2a0cb677f914d8bd286e32ff3923312888e44c"} Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.359566 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:39 crc kubenswrapper[4895]: E0129 08:43:39.362455 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:39.862421941 +0000 UTC m=+161.503930087 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.372182 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" event={"ID":"9a967adb-d943-4101-8eed-964b97623a5c","Type":"ContainerStarted","Data":"0c75dffad7a4b99e0d8ce4b0d4d723751330169451d88265b8eb134b98367868"} Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.372264 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" event={"ID":"9a967adb-d943-4101-8eed-964b97623a5c","Type":"ContainerStarted","Data":"9b906a26c05e85a4b021213b8ed9a1e620ff322812f28b6a72a5e43244c8199a"} Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.402781 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z" event={"ID":"6c437b17-e286-4520-bde1-4e13376343e3","Type":"ContainerStarted","Data":"157a7605983103ab2e98bccdc780a3afcee03e1785ba776090173eec6d114f8c"} Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.402866 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z" event={"ID":"6c437b17-e286-4520-bde1-4e13376343e3","Type":"ContainerStarted","Data":"31c7ce3e2b903ecf02924f92b0e65833a9f466e65818c35b42c43656478f2d6f"} Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.430130 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" event={"ID":"8ebf4fd3-9460-4809-aa41-7db3a1bc0032","Type":"ContainerStarted","Data":"64083c36e1eeec5efba64aaee96b40e8285c17376f541e5b12dcc7ef37621bdd"} Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.433293 4895 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4zbst container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.433357 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" podUID="8ebf4fd3-9460-4809-aa41-7db3a1bc0032" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.433590 4895 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rmbm8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.433681 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" podUID="8f6088a3-2691-4029-a576-2a5abcd3b107" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.452451 4895 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j5v6v container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get 
\"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.452533 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" podUID="8bf5c6f5-7b1f-4bf7-9d47-b181232c1107" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.462434 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.471482 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fjjn" Jan 29 08:43:39 crc kubenswrapper[4895]: E0129 08:43:39.474663 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:39.974646161 +0000 UTC m=+161.616154307 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.570380 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:39 crc kubenswrapper[4895]: E0129 08:43:39.571718 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:40.071703269 +0000 UTC m=+161.713211415 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.603526 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:39 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:39 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:39 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.603601 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.675019 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:39 crc kubenswrapper[4895]: E0129 08:43:39.675530 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 08:43:40.175512961 +0000 UTC m=+161.817021107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.778308 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:39 crc kubenswrapper[4895]: E0129 08:43:39.778568 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:40.27850936 +0000 UTC m=+161.920017506 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.785821 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:39 crc kubenswrapper[4895]: E0129 08:43:39.787354 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:40.287322719 +0000 UTC m=+161.928830875 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.875451 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lmszx" podStartSLOduration=134.875432074 podStartE2EDuration="2m14.875432074s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:39.870382177 +0000 UTC m=+161.511890323" watchObservedRunningTime="2026-01-29 08:43:39.875432074 +0000 UTC m=+161.516940220" Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.899756 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:39 crc kubenswrapper[4895]: E0129 08:43:39.900272 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:40.400248178 +0000 UTC m=+162.041756334 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.966135 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.966222 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.991486 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.991595 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.991939 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:43:39 crc kubenswrapper[4895]: I0129 08:43:39.992020 4895 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.035020 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jpb5z" podStartSLOduration=135.0349902 podStartE2EDuration="2m15.0349902s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:39.930399168 +0000 UTC m=+161.571907314" watchObservedRunningTime="2026-01-29 08:43:40.0349902 +0000 UTC m=+161.676498336" Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.035407 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:40 crc kubenswrapper[4895]: E0129 08:43:40.036128 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:40.536106211 +0000 UTC m=+162.177614377 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.098609 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-852sw" podStartSLOduration=11.098583949 podStartE2EDuration="11.098583949s" podCreationTimestamp="2026-01-29 08:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:40.097586872 +0000 UTC m=+161.739095038" watchObservedRunningTime="2026-01-29 08:43:40.098583949 +0000 UTC m=+161.740092095" Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.139214 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:40 crc kubenswrapper[4895]: E0129 08:43:40.140769 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:40.640749495 +0000 UTC m=+162.282257641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.151969 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.171201 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8dlcj" Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.203484 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnrn9" podStartSLOduration=135.203410438 podStartE2EDuration="2m15.203410438s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:40.146490391 +0000 UTC m=+161.787998567" watchObservedRunningTime="2026-01-29 08:43:40.203410438 +0000 UTC m=+161.844918584" Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.244894 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.246272 4895 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" podStartSLOduration=136.246244702 podStartE2EDuration="2m16.246244702s" podCreationTimestamp="2026-01-29 08:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:40.206345048 +0000 UTC m=+161.847853214" watchObservedRunningTime="2026-01-29 08:43:40.246244702 +0000 UTC m=+161.887752848" Jan 29 08:43:40 crc kubenswrapper[4895]: E0129 08:43:40.246623 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:40.746597011 +0000 UTC m=+162.388105357 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.341664 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v4mz4" podStartSLOduration=135.341636644 podStartE2EDuration="2m15.341636644s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:40.268428215 +0000 UTC m=+161.909936361" watchObservedRunningTime="2026-01-29 08:43:40.341636644 +0000 UTC m=+161.983144790" Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.353560 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:40 crc kubenswrapper[4895]: E0129 08:43:40.354016 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:40.85399437 +0000 UTC m=+162.495502516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.431786 4895 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5fc8p container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.431870 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" podUID="e6030804-d717-42c9-b2b2-8eaaadaddca0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.439084 4895 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rmbm8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.439147 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" podUID="8f6088a3-2691-4029-a576-2a5abcd3b107" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.439971 4895 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4zbst container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.440415 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" podUID="8ebf4fd3-9460-4809-aa41-7db3a1bc0032" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.455218 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:40 crc 
kubenswrapper[4895]: E0129 08:43:40.455786 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:40.955768857 +0000 UTC m=+162.597277003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.556265 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:40 crc kubenswrapper[4895]: E0129 08:43:40.556778 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:41.056749731 +0000 UTC m=+162.698257877 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.660719 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:40 crc kubenswrapper[4895]: E0129 08:43:40.661156 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:41.161138867 +0000 UTC m=+162.802647013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.705263 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:40 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:40 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:40 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.705361 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.762383 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:40 crc kubenswrapper[4895]: E0129 08:43:40.762636 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 08:43:41.262598945 +0000 UTC m=+162.904107091 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.762758 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:40 crc kubenswrapper[4895]: E0129 08:43:40.763215 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:41.263200671 +0000 UTC m=+162.904708817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.864376 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:40 crc kubenswrapper[4895]: E0129 08:43:40.864645 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:41.364607518 +0000 UTC m=+163.006115664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.864709 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:40 crc kubenswrapper[4895]: E0129 08:43:40.865049 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:41.365036149 +0000 UTC m=+163.006544295 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.966002 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:40 crc kubenswrapper[4895]: E0129 08:43:40.966223 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:41.466188998 +0000 UTC m=+163.107697144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.966278 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:40 crc kubenswrapper[4895]: E0129 08:43:40.966760 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:41.466745714 +0000 UTC m=+163.108253860 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.967732 4895 patch_prober.go:28] interesting pod/apiserver-76f77b778f-klns8 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 29 08:43:40 crc kubenswrapper[4895]: [+]log ok Jan 29 08:43:40 crc kubenswrapper[4895]: [+]etcd ok Jan 29 08:43:40 crc kubenswrapper[4895]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 29 08:43:40 crc kubenswrapper[4895]: [+]poststarthook/generic-apiserver-start-informers ok Jan 29 08:43:40 crc kubenswrapper[4895]: [+]poststarthook/max-in-flight-filter ok Jan 29 08:43:40 crc kubenswrapper[4895]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 29 08:43:40 crc kubenswrapper[4895]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 29 08:43:40 crc kubenswrapper[4895]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 29 08:43:40 crc kubenswrapper[4895]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 29 08:43:40 crc kubenswrapper[4895]: [+]poststarthook/project.openshift.io-projectcache ok Jan 29 08:43:40 crc kubenswrapper[4895]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 29 08:43:40 crc kubenswrapper[4895]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Jan 29 08:43:40 crc kubenswrapper[4895]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 29 
08:43:40 crc kubenswrapper[4895]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 29 08:43:40 crc kubenswrapper[4895]: livez check failed Jan 29 08:43:40 crc kubenswrapper[4895]: I0129 08:43:40.967788 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-klns8" podUID="0b82afa3-8f94-41e2-828e-4debc9e73088" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.024437 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.025208 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.028259 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.033597 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.067719 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.068599 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:41 crc kubenswrapper[4895]: E0129 08:43:41.072616 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:41.572552369 +0000 UTC m=+163.214060525 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.082463 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:41 crc kubenswrapper[4895]: E0129 08:43:41.083509 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:41.583493746 +0000 UTC m=+163.225001892 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.131608 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nfrjk"] Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.134261 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.166343 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.167257 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nfrjk"] Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.187062 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.187369 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8ffe82a-3487-49fb-a79b-737dd5effd12-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a8ffe82a-3487-49fb-a79b-737dd5effd12\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 
08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.187443 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8ffe82a-3487-49fb-a79b-737dd5effd12-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a8ffe82a-3487-49fb-a79b-737dd5effd12\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 08:43:41 crc kubenswrapper[4895]: E0129 08:43:41.187578 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:41.687557994 +0000 UTC m=+163.329066140 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.262485 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c8p6f"] Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.265779 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.270337 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.291040 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0349d46c-bf39-4ba0-99be-22445866386b-catalog-content\") pod \"certified-operators-nfrjk\" (UID: \"0349d46c-bf39-4ba0-99be-22445866386b\") " pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.291113 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8ffe82a-3487-49fb-a79b-737dd5effd12-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a8ffe82a-3487-49fb-a79b-737dd5effd12\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.291142 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztwrd\" (UniqueName: \"kubernetes.io/projected/0349d46c-bf39-4ba0-99be-22445866386b-kube-api-access-ztwrd\") pod \"certified-operators-nfrjk\" (UID: \"0349d46c-bf39-4ba0-99be-22445866386b\") " pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.291185 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:41 crc 
kubenswrapper[4895]: I0129 08:43:41.291224 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8ffe82a-3487-49fb-a79b-737dd5effd12-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a8ffe82a-3487-49fb-a79b-737dd5effd12\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.291259 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0349d46c-bf39-4ba0-99be-22445866386b-utilities\") pod \"certified-operators-nfrjk\" (UID: \"0349d46c-bf39-4ba0-99be-22445866386b\") " pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.291418 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8ffe82a-3487-49fb-a79b-737dd5effd12-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a8ffe82a-3487-49fb-a79b-737dd5effd12\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 08:43:41 crc kubenswrapper[4895]: E0129 08:43:41.291763 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:41.791742017 +0000 UTC m=+163.433250163 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.308292 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c8p6f"] Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.348350 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8ffe82a-3487-49fb-a79b-737dd5effd12-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a8ffe82a-3487-49fb-a79b-737dd5effd12\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.377879 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.392947 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.393241 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0349d46c-bf39-4ba0-99be-22445866386b-catalog-content\") pod \"certified-operators-nfrjk\" (UID: \"0349d46c-bf39-4ba0-99be-22445866386b\") " pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.393286 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtsfn\" (UniqueName: \"kubernetes.io/projected/a0b31a89-9993-4996-8b19-961efcb757ed-kube-api-access-rtsfn\") pod \"community-operators-c8p6f\" (UID: \"a0b31a89-9993-4996-8b19-961efcb757ed\") " pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.393335 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztwrd\" (UniqueName: \"kubernetes.io/projected/0349d46c-bf39-4ba0-99be-22445866386b-kube-api-access-ztwrd\") pod \"certified-operators-nfrjk\" (UID: \"0349d46c-bf39-4ba0-99be-22445866386b\") " pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.393357 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b31a89-9993-4996-8b19-961efcb757ed-catalog-content\") pod 
\"community-operators-c8p6f\" (UID: \"a0b31a89-9993-4996-8b19-961efcb757ed\") " pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.393407 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b31a89-9993-4996-8b19-961efcb757ed-utilities\") pod \"community-operators-c8p6f\" (UID: \"a0b31a89-9993-4996-8b19-961efcb757ed\") " pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.393442 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0349d46c-bf39-4ba0-99be-22445866386b-utilities\") pod \"certified-operators-nfrjk\" (UID: \"0349d46c-bf39-4ba0-99be-22445866386b\") " pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.394010 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0349d46c-bf39-4ba0-99be-22445866386b-utilities\") pod \"certified-operators-nfrjk\" (UID: \"0349d46c-bf39-4ba0-99be-22445866386b\") " pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:43:41 crc kubenswrapper[4895]: E0129 08:43:41.394113 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:41.894092468 +0000 UTC m=+163.535600614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.394389 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0349d46c-bf39-4ba0-99be-22445866386b-catalog-content\") pod \"certified-operators-nfrjk\" (UID: \"0349d46c-bf39-4ba0-99be-22445866386b\") " pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.431034 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztwrd\" (UniqueName: \"kubernetes.io/projected/0349d46c-bf39-4ba0-99be-22445866386b-kube-api-access-ztwrd\") pod \"certified-operators-nfrjk\" (UID: \"0349d46c-bf39-4ba0-99be-22445866386b\") " pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.464030 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2d5f8"] Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.465420 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.488497 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.494752 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtsfn\" (UniqueName: \"kubernetes.io/projected/a0b31a89-9993-4996-8b19-961efcb757ed-kube-api-access-rtsfn\") pod \"community-operators-c8p6f\" (UID: \"a0b31a89-9993-4996-8b19-961efcb757ed\") " pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.494835 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b31a89-9993-4996-8b19-961efcb757ed-catalog-content\") pod \"community-operators-c8p6f\" (UID: \"a0b31a89-9993-4996-8b19-961efcb757ed\") " pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.494874 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.494902 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b31a89-9993-4996-8b19-961efcb757ed-utilities\") pod \"community-operators-c8p6f\" (UID: \"a0b31a89-9993-4996-8b19-961efcb757ed\") " pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.495397 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b31a89-9993-4996-8b19-961efcb757ed-utilities\") pod \"community-operators-c8p6f\" 
(UID: \"a0b31a89-9993-4996-8b19-961efcb757ed\") " pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.496027 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b31a89-9993-4996-8b19-961efcb757ed-catalog-content\") pod \"community-operators-c8p6f\" (UID: \"a0b31a89-9993-4996-8b19-961efcb757ed\") " pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:43:41 crc kubenswrapper[4895]: E0129 08:43:41.496349 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:41.996331977 +0000 UTC m=+163.637840123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.510044 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2d5f8"] Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.520498 4895 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j5v6v container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.520576 4895 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" podUID="8bf5c6f5-7b1f-4bf7-9d47-b181232c1107" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.522302 4895 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j5v6v container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.522399 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" podUID="8bf5c6f5-7b1f-4bf7-9d47-b181232c1107" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.526616 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtsfn\" (UniqueName: \"kubernetes.io/projected/a0b31a89-9993-4996-8b19-961efcb757ed-kube-api-access-rtsfn\") pod \"community-operators-c8p6f\" (UID: \"a0b31a89-9993-4996-8b19-961efcb757ed\") " pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.536249 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-njfl7" event={"ID":"40ec3f19-6989-4c7c-92e2-1d1501a75b24","Type":"ContainerStarted","Data":"54c7e660a6357950246579acec1baed5ba14182d64db25f2f3206983355bbf77"} Jan 29 08:43:41 crc 
kubenswrapper[4895]: I0129 08:43:41.596259 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.597056 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.597406 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkl55\" (UniqueName: \"kubernetes.io/projected/b8e96926-7c32-4a64-b37d-342a66d925ea-kube-api-access-bkl55\") pod \"certified-operators-2d5f8\" (UID: \"b8e96926-7c32-4a64-b37d-342a66d925ea\") " pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 08:43:41 crc kubenswrapper[4895]: E0129 08:43:41.597503 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:42.097477585 +0000 UTC m=+163.738985881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.597529 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.597678 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.597775 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8e96926-7c32-4a64-b37d-342a66d925ea-catalog-content\") pod \"certified-operators-2d5f8\" (UID: \"b8e96926-7c32-4a64-b37d-342a66d925ea\") " pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.597807 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8e96926-7c32-4a64-b37d-342a66d925ea-utilities\") pod \"certified-operators-2d5f8\" (UID: \"b8e96926-7c32-4a64-b37d-342a66d925ea\") " pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 08:43:41 crc kubenswrapper[4895]: E0129 08:43:41.598162 4895 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:42.098152844 +0000 UTC m=+163.739660990 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.618201 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:41 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:41 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:41 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.618280 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.678416 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l25mt"] Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.679532 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.700653 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.701021 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8e96926-7c32-4a64-b37d-342a66d925ea-catalog-content\") pod \"certified-operators-2d5f8\" (UID: \"b8e96926-7c32-4a64-b37d-342a66d925ea\") " pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.701082 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8e96926-7c32-4a64-b37d-342a66d925ea-utilities\") pod \"certified-operators-2d5f8\" (UID: \"b8e96926-7c32-4a64-b37d-342a66d925ea\") " pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.701138 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkl55\" (UniqueName: \"kubernetes.io/projected/b8e96926-7c32-4a64-b37d-342a66d925ea-kube-api-access-bkl55\") pod \"certified-operators-2d5f8\" (UID: \"b8e96926-7c32-4a64-b37d-342a66d925ea\") " pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 08:43:41 crc kubenswrapper[4895]: E0129 08:43:41.703741 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 08:43:42.203709733 +0000 UTC m=+163.845217879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.705028 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8e96926-7c32-4a64-b37d-342a66d925ea-catalog-content\") pod \"certified-operators-2d5f8\" (UID: \"b8e96926-7c32-4a64-b37d-342a66d925ea\") " pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.705268 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8e96926-7c32-4a64-b37d-342a66d925ea-utilities\") pod \"certified-operators-2d5f8\" (UID: \"b8e96926-7c32-4a64-b37d-342a66d925ea\") " pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.716193 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l25mt"] Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.750804 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkl55\" (UniqueName: \"kubernetes.io/projected/b8e96926-7c32-4a64-b37d-342a66d925ea-kube-api-access-bkl55\") pod \"certified-operators-2d5f8\" (UID: \"b8e96926-7c32-4a64-b37d-342a66d925ea\") " pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.833453 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m99kq\" (UniqueName: \"kubernetes.io/projected/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-kube-api-access-m99kq\") pod \"community-operators-l25mt\" (UID: \"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb\") " pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.833503 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-catalog-content\") pod \"community-operators-l25mt\" (UID: \"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb\") " pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.833545 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.833572 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-utilities\") pod \"community-operators-l25mt\" (UID: \"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb\") " pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.833775 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 08:43:41 crc kubenswrapper[4895]: E0129 08:43:41.834652 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:42.334635632 +0000 UTC m=+163.976143778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.941716 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.949518 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.949702 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-catalog-content\") pod \"community-operators-l25mt\" (UID: \"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb\") " pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.949752 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-utilities\") pod \"community-operators-l25mt\" (UID: \"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb\") " pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.949824 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m99kq\" (UniqueName: \"kubernetes.io/projected/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-kube-api-access-m99kq\") pod \"community-operators-l25mt\" (UID: \"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb\") " pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:43:41 crc kubenswrapper[4895]: E0129 08:43:41.950185 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:42.450166381 +0000 UTC m=+164.091674527 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.951039 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-catalog-content\") pod \"community-operators-l25mt\" (UID: \"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb\") " pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:43:41 crc kubenswrapper[4895]: I0129 08:43:41.951285 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-utilities\") pod \"community-operators-l25mt\" (UID: \"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb\") " pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.007302 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m99kq\" (UniqueName: \"kubernetes.io/projected/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-kube-api-access-m99kq\") pod \"community-operators-l25mt\" (UID: \"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb\") " pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.007665 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.054087 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:42 crc kubenswrapper[4895]: E0129 08:43:42.060795 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:42.560771687 +0000 UTC m=+164.202279833 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.260750 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:42 crc kubenswrapper[4895]: E0129 08:43:42.261313 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:42.761291836 +0000 UTC m=+164.402799982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.297804 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.297841 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.392362 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:42 crc kubenswrapper[4895]: E0129 08:43:42.392863 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:42.892848572 +0000 UTC m=+164.534356718 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.432009 4895 patch_prober.go:28] interesting pod/console-f9d7485db-z5sff container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.432545 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-z5sff" podUID="ea9f8a45-3fdc-4780-a008-e0f77c99dffc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.494872 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:42 crc kubenswrapper[4895]: E0129 08:43:42.495124 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:42.995086901 +0000 UTC m=+164.636595047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.495524 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:42 crc kubenswrapper[4895]: E0129 08:43:42.496129 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:42.996107369 +0000 UTC m=+164.637615525 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.601555 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:42 crc kubenswrapper[4895]: E0129 08:43:42.601890 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:43.101869203 +0000 UTC m=+164.743377349 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.610616 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-njfl7" event={"ID":"40ec3f19-6989-4c7c-92e2-1d1501a75b24","Type":"ContainerStarted","Data":"b3cdc517838e302c0a83a94deae1b96b1e59898431cc103cb0cea661ae695e93"} Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.615023 4895 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rmbm8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.615081 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" podUID="8f6088a3-2691-4029-a576-2a5abcd3b107" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.615115 4895 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rmbm8 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.615176 4895 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" podUID="8f6088a3-2691-4029-a576-2a5abcd3b107" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.622379 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:42 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:42 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:42 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.622482 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.711859 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:42 crc kubenswrapper[4895]: E0129 08:43:42.712388 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:43.212372626 +0000 UTC m=+164.853880782 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.727095 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4zbst" Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.812888 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:42 crc kubenswrapper[4895]: E0129 08:43:42.813544 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:43.313523266 +0000 UTC m=+164.955031412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.832764 4895 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.881784 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 08:43:42 crc kubenswrapper[4895]: I0129 08:43:42.988740 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:42 crc kubenswrapper[4895]: E0129 08:43:42.991395 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:43.491374679 +0000 UTC m=+165.132882835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.082726 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-85kbq"] Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.086105 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.091028 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.093394 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 08:43:43 crc kubenswrapper[4895]: E0129 08:43:43.094004 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:43.593965407 +0000 UTC m=+165.235473553 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.094778 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-85kbq"] Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.103274 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:43 crc kubenswrapper[4895]: E0129 08:43:43.103638 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:43.60362186 +0000 UTC m=+165.245130006 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.174813 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nfrjk"] Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.217059 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.217407 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrgzm\" (UniqueName: \"kubernetes.io/projected/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-kube-api-access-wrgzm\") pod \"redhat-marketplace-85kbq\" (UID: \"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8\") " pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.217470 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-utilities\") pod \"redhat-marketplace-85kbq\" (UID: \"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8\") " pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.217534 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-catalog-content\") pod \"redhat-marketplace-85kbq\" (UID: \"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8\") " pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:43:43 crc kubenswrapper[4895]: E0129 08:43:43.217770 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:43.717740771 +0000 UTC m=+165.359248917 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.269331 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2d5f8"] Jan 29 08:43:43 crc kubenswrapper[4895]: W0129 08:43:43.305147 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8e96926_7c32_4a64_b37d_342a66d925ea.slice/crio-2d08763cf1982f8c64ded4f6ac54dacc982c229be8ab06e950855748f29c892f WatchSource:0}: Error finding container 2d08763cf1982f8c64ded4f6ac54dacc982c229be8ab06e950855748f29c892f: Status 404 returned error can't find the container with id 2d08763cf1982f8c64ded4f6ac54dacc982c229be8ab06e950855748f29c892f Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.320306 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrgzm\" 
(UniqueName: \"kubernetes.io/projected/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-kube-api-access-wrgzm\") pod \"redhat-marketplace-85kbq\" (UID: \"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8\") " pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.320368 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.320398 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-utilities\") pod \"redhat-marketplace-85kbq\" (UID: \"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8\") " pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.320669 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-catalog-content\") pod \"redhat-marketplace-85kbq\" (UID: \"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8\") " pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.321601 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-utilities\") pod \"redhat-marketplace-85kbq\" (UID: \"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8\") " pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:43:43 crc kubenswrapper[4895]: E0129 08:43:43.322103 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 08:43:43.822066427 +0000 UTC m=+165.463574563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bt8sz" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.322326 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-catalog-content\") pod \"redhat-marketplace-85kbq\" (UID: \"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8\") " pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.365517 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrgzm\" (UniqueName: \"kubernetes.io/projected/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-kube-api-access-wrgzm\") pod \"redhat-marketplace-85kbq\" (UID: \"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8\") " pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.421646 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:43 crc kubenswrapper[4895]: E0129 08:43:43.422066 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 08:43:43.922048203 +0000 UTC m=+165.563556349 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.456085 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.459218 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wtxmb"] Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.460641 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.463193 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l25mt"] Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.477229 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c8p6f"] Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.480533 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtxmb"] Jan 29 08:43:43 crc kubenswrapper[4895]: W0129 08:43:43.485201 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod886dfa02_5b87_4bdf_9bf5_fcd914ff2afb.slice/crio-64c867c098b286972ccd47007db6511ecf118a0e7dced0709126f7308ff70125 WatchSource:0}: Error finding container 64c867c098b286972ccd47007db6511ecf118a0e7dced0709126f7308ff70125: Status 404 returned error can't find the container with id 64c867c098b286972ccd47007db6511ecf118a0e7dced0709126f7308ff70125 Jan 29 08:43:43 crc kubenswrapper[4895]: W0129 08:43:43.487743 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0b31a89_9993_4996_8b19_961efcb757ed.slice/crio-4cc7263fe7741a49ddca371c22dd6537efc466132608f6fe8762790778cc18c9 WatchSource:0}: Error finding container 4cc7263fe7741a49ddca371c22dd6537efc466132608f6fe8762790778cc18c9: Status 404 returned error can't find the container with id 4cc7263fe7741a49ddca371c22dd6537efc466132608f6fe8762790778cc18c9 Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.487832 4895 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T08:43:42.83319638Z","Handler":null,"Name":""} Jan 29 08:43:43 crc 
kubenswrapper[4895]: I0129 08:43:43.518527 4895 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.518593 4895 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.526216 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j5v6v" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.526344 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.526454 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khfjk\" (UniqueName: \"kubernetes.io/projected/3eb3534d-8971-42ec-8aaf-a970b786e631-kube-api-access-khfjk\") pod \"redhat-marketplace-wtxmb\" (UID: \"3eb3534d-8971-42ec-8aaf-a970b786e631\") " pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.526512 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb3534d-8971-42ec-8aaf-a970b786e631-utilities\") pod \"redhat-marketplace-wtxmb\" (UID: \"3eb3534d-8971-42ec-8aaf-a970b786e631\") " pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:43:43 crc 
kubenswrapper[4895]: I0129 08:43:43.526552 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb3534d-8971-42ec-8aaf-a970b786e631-catalog-content\") pod \"redhat-marketplace-wtxmb\" (UID: \"3eb3534d-8971-42ec-8aaf-a970b786e631\") " pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.529954 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.530390 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.580690 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bt8sz\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.607846 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:43 crc 
kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:43 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:43 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.608019 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.627671 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.628033 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb3534d-8971-42ec-8aaf-a970b786e631-utilities\") pod \"redhat-marketplace-wtxmb\" (UID: \"3eb3534d-8971-42ec-8aaf-a970b786e631\") " pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.628097 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb3534d-8971-42ec-8aaf-a970b786e631-catalog-content\") pod \"redhat-marketplace-wtxmb\" (UID: \"3eb3534d-8971-42ec-8aaf-a970b786e631\") " pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.628239 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khfjk\" (UniqueName: \"kubernetes.io/projected/3eb3534d-8971-42ec-8aaf-a970b786e631-kube-api-access-khfjk\") pod \"redhat-marketplace-wtxmb\" (UID: 
\"3eb3534d-8971-42ec-8aaf-a970b786e631\") " pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.629280 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb3534d-8971-42ec-8aaf-a970b786e631-utilities\") pod \"redhat-marketplace-wtxmb\" (UID: \"3eb3534d-8971-42ec-8aaf-a970b786e631\") " pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.629356 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb3534d-8971-42ec-8aaf-a970b786e631-catalog-content\") pod \"redhat-marketplace-wtxmb\" (UID: \"3eb3534d-8971-42ec-8aaf-a970b786e631\") " pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.630834 4895 generic.go:334] "Generic (PLEG): container finished" podID="0349d46c-bf39-4ba0-99be-22445866386b" containerID="1968b6153f474f79cad91112f3690e12adb3e5d39b50037f695a7187996393fb" exitCode=0 Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.630987 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nfrjk" event={"ID":"0349d46c-bf39-4ba0-99be-22445866386b","Type":"ContainerDied","Data":"1968b6153f474f79cad91112f3690e12adb3e5d39b50037f695a7187996393fb"} Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.631027 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nfrjk" event={"ID":"0349d46c-bf39-4ba0-99be-22445866386b","Type":"ContainerStarted","Data":"8405444788f55533c58471370a810b47db0fce372d2c52743ef496a45a7873c9"} Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.633145 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.638568 4895 
generic.go:334] "Generic (PLEG): container finished" podID="b8e96926-7c32-4a64-b37d-342a66d925ea" containerID="6b7a6ac32292da121904182281cbd50dbb76cfc24b8684500df8b7f74ff4e82b" exitCode=0 Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.638682 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2d5f8" event={"ID":"b8e96926-7c32-4a64-b37d-342a66d925ea","Type":"ContainerDied","Data":"6b7a6ac32292da121904182281cbd50dbb76cfc24b8684500df8b7f74ff4e82b"} Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.638719 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2d5f8" event={"ID":"b8e96926-7c32-4a64-b37d-342a66d925ea","Type":"ContainerStarted","Data":"2d08763cf1982f8c64ded4f6ac54dacc982c229be8ab06e950855748f29c892f"} Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.642642 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l25mt" event={"ID":"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb","Type":"ContainerStarted","Data":"64c867c098b286972ccd47007db6511ecf118a0e7dced0709126f7308ff70125"} Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.643586 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8p6f" event={"ID":"a0b31a89-9993-4996-8b19-961efcb757ed","Type":"ContainerStarted","Data":"4cc7263fe7741a49ddca371c22dd6537efc466132608f6fe8762790778cc18c9"} Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.645614 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-njfl7" event={"ID":"40ec3f19-6989-4c7c-92e2-1d1501a75b24","Type":"ContainerStarted","Data":"54134e5d8bd8afb9cd18dbed3694f612424f22aeea32b52d81daa77c78fcb8b5"} Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.649363 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"a8ffe82a-3487-49fb-a79b-737dd5effd12","Type":"ContainerStarted","Data":"66857798e0c88d94aaa6bc7125026ed4dbc94ae510cc18faa4d89c3bd6c67315"} Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.649465 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a8ffe82a-3487-49fb-a79b-737dd5effd12","Type":"ContainerStarted","Data":"c4faec8eea11769a905eb3a1beb50ca3b25e1e08edf6827d07356a99093af7aa"} Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.650678 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.658941 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khfjk\" (UniqueName: \"kubernetes.io/projected/3eb3534d-8971-42ec-8aaf-a970b786e631-kube-api-access-khfjk\") pod \"redhat-marketplace-wtxmb\" (UID: \"3eb3534d-8971-42ec-8aaf-a970b786e631\") " pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.721338 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-njfl7" podStartSLOduration=14.721306637 podStartE2EDuration="14.721306637s" podCreationTimestamp="2026-01-29 08:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:43.716081715 +0000 UTC m=+165.357589881" watchObservedRunningTime="2026-01-29 08:43:43.721306637 +0000 UTC m=+165.362814783" Jan 29 08:43:43 crc 
kubenswrapper[4895]: I0129 08:43:43.768115 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.768092078 podStartE2EDuration="3.768092078s" podCreationTimestamp="2026-01-29 08:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:43.763321109 +0000 UTC m=+165.404829255" watchObservedRunningTime="2026-01-29 08:43:43.768092078 +0000 UTC m=+165.409600224" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.785856 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.877174 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:43 crc kubenswrapper[4895]: I0129 08:43:43.892753 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-85kbq"] Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.257445 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b2r4l"] Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.259123 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.261896 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.268876 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b2r4l"] Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.413776 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bt8sz"] Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.444187 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-catalog-content\") pod \"redhat-operators-b2r4l\" (UID: \"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124\") " pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.444257 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-utilities\") pod \"redhat-operators-b2r4l\" (UID: \"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124\") " pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.444351 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8tnz\" (UniqueName: \"kubernetes.io/projected/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-kube-api-access-j8tnz\") pod \"redhat-operators-b2r4l\" (UID: \"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124\") " pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.475518 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-wtxmb"] Jan 29 08:43:44 crc kubenswrapper[4895]: W0129 08:43:44.479291 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod766285e2_63c4_4073_9b24_d5fbf4b26638.slice/crio-06b3c269250f81fd7edb9dc4f1e4116846e2b526301631ce54c717d3b09bd1dc WatchSource:0}: Error finding container 06b3c269250f81fd7edb9dc4f1e4116846e2b526301631ce54c717d3b09bd1dc: Status 404 returned error can't find the container with id 06b3c269250f81fd7edb9dc4f1e4116846e2b526301631ce54c717d3b09bd1dc Jan 29 08:43:44 crc kubenswrapper[4895]: W0129 08:43:44.481876 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3eb3534d_8971_42ec_8aaf_a970b786e631.slice/crio-ca9f2fd489e64420852037e14a5788202c3f7211c2d7cb90379424a98112daeb WatchSource:0}: Error finding container ca9f2fd489e64420852037e14a5788202c3f7211c2d7cb90379424a98112daeb: Status 404 returned error can't find the container with id ca9f2fd489e64420852037e14a5788202c3f7211c2d7cb90379424a98112daeb Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.546662 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8tnz\" (UniqueName: \"kubernetes.io/projected/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-kube-api-access-j8tnz\") pod \"redhat-operators-b2r4l\" (UID: \"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124\") " pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.546772 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-catalog-content\") pod \"redhat-operators-b2r4l\" (UID: \"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124\") " pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.546812 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-utilities\") pod \"redhat-operators-b2r4l\" (UID: \"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124\") " pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.547566 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-utilities\") pod \"redhat-operators-b2r4l\" (UID: \"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124\") " pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.549132 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-catalog-content\") pod \"redhat-operators-b2r4l\" (UID: \"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124\") " pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.577132 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8tnz\" (UniqueName: \"kubernetes.io/projected/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-kube-api-access-j8tnz\") pod \"redhat-operators-b2r4l\" (UID: \"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124\") " pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.601000 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:44 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:44 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:44 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:44 crc 
kubenswrapper[4895]: I0129 08:43:44.601575 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.621619 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.655748 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xgjxz"] Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.657360 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.679602 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xgjxz"] Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.698487 4895 generic.go:334] "Generic (PLEG): container finished" podID="886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" containerID="c516f925cbb38c2a0e36e4a49ede7ada25386608daca55e0968c629a2c4bbc50" exitCode=0 Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.698619 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l25mt" event={"ID":"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb","Type":"ContainerDied","Data":"c516f925cbb38c2a0e36e4a49ede7ada25386608daca55e0968c629a2c4bbc50"} Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.710806 4895 generic.go:334] "Generic (PLEG): container finished" podID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" containerID="ec07d9d211714fe810519f87b111b9ebc115688661492029c032fba2aed09110" exitCode=0 Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.712030 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-85kbq" event={"ID":"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8","Type":"ContainerDied","Data":"ec07d9d211714fe810519f87b111b9ebc115688661492029c032fba2aed09110"} Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.712149 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85kbq" event={"ID":"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8","Type":"ContainerStarted","Data":"9ee2c74fdeeb557eb39bad2302aa026dfe87aea192849bd8b76915ff18165e02"} Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.720012 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtxmb" event={"ID":"3eb3534d-8971-42ec-8aaf-a970b786e631","Type":"ContainerStarted","Data":"ca9f2fd489e64420852037e14a5788202c3f7211c2d7cb90379424a98112daeb"} Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.729756 4895 generic.go:334] "Generic (PLEG): container finished" podID="a0b31a89-9993-4996-8b19-961efcb757ed" containerID="6ebf6543c611863f14b54c48306b76f55675f19a31fe2a9a7902e422e1d2eeca" exitCode=0 Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.730016 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8p6f" event={"ID":"a0b31a89-9993-4996-8b19-961efcb757ed","Type":"ContainerDied","Data":"6ebf6543c611863f14b54c48306b76f55675f19a31fe2a9a7902e422e1d2eeca"} Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.737970 4895 generic.go:334] "Generic (PLEG): container finished" podID="a8ffe82a-3487-49fb-a79b-737dd5effd12" containerID="66857798e0c88d94aaa6bc7125026ed4dbc94ae510cc18faa4d89c3bd6c67315" exitCode=0 Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.738067 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"a8ffe82a-3487-49fb-a79b-737dd5effd12","Type":"ContainerDied","Data":"66857798e0c88d94aaa6bc7125026ed4dbc94ae510cc18faa4d89c3bd6c67315"} Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.740378 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" event={"ID":"766285e2-63c4-4073-9b24-d5fbf4b26638","Type":"ContainerStarted","Data":"06b3c269250f81fd7edb9dc4f1e4116846e2b526301631ce54c717d3b09bd1dc"} Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.761287 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkzlf\" (UniqueName: \"kubernetes.io/projected/c14af255-6e29-4bea-978b-8b5bf6285bd8-kube-api-access-qkzlf\") pod \"redhat-operators-xgjxz\" (UID: \"c14af255-6e29-4bea-978b-8b5bf6285bd8\") " pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.761333 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c14af255-6e29-4bea-978b-8b5bf6285bd8-catalog-content\") pod \"redhat-operators-xgjxz\" (UID: \"c14af255-6e29-4bea-978b-8b5bf6285bd8\") " pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.761381 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c14af255-6e29-4bea-978b-8b5bf6285bd8-utilities\") pod \"redhat-operators-xgjxz\" (UID: \"c14af255-6e29-4bea-978b-8b5bf6285bd8\") " pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.864844 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkzlf\" (UniqueName: \"kubernetes.io/projected/c14af255-6e29-4bea-978b-8b5bf6285bd8-kube-api-access-qkzlf\") pod 
\"redhat-operators-xgjxz\" (UID: \"c14af255-6e29-4bea-978b-8b5bf6285bd8\") " pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.864890 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c14af255-6e29-4bea-978b-8b5bf6285bd8-catalog-content\") pod \"redhat-operators-xgjxz\" (UID: \"c14af255-6e29-4bea-978b-8b5bf6285bd8\") " pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.864946 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c14af255-6e29-4bea-978b-8b5bf6285bd8-utilities\") pod \"redhat-operators-xgjxz\" (UID: \"c14af255-6e29-4bea-978b-8b5bf6285bd8\") " pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.866004 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c14af255-6e29-4bea-978b-8b5bf6285bd8-utilities\") pod \"redhat-operators-xgjxz\" (UID: \"c14af255-6e29-4bea-978b-8b5bf6285bd8\") " pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.866688 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c14af255-6e29-4bea-978b-8b5bf6285bd8-catalog-content\") pod \"redhat-operators-xgjxz\" (UID: \"c14af255-6e29-4bea-978b-8b5bf6285bd8\") " pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.904641 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkzlf\" (UniqueName: \"kubernetes.io/projected/c14af255-6e29-4bea-978b-8b5bf6285bd8-kube-api-access-qkzlf\") pod \"redhat-operators-xgjxz\" (UID: \"c14af255-6e29-4bea-978b-8b5bf6285bd8\") " 
pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.976311 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:44 crc kubenswrapper[4895]: I0129 08:43:44.983641 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-klns8" Jan 29 08:43:45 crc kubenswrapper[4895]: I0129 08:43:45.011482 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:43:45 crc kubenswrapper[4895]: I0129 08:43:45.248634 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 29 08:43:45 crc kubenswrapper[4895]: I0129 08:43:45.608323 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:45 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:45 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:45 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:45 crc kubenswrapper[4895]: I0129 08:43:45.608685 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:45 crc kubenswrapper[4895]: I0129 08:43:45.664472 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b2r4l"] Jan 29 08:43:45 crc kubenswrapper[4895]: I0129 08:43:45.673276 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-xgjxz"] Jan 29 08:43:45 crc kubenswrapper[4895]: I0129 08:43:45.753290 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b2r4l" event={"ID":"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124","Type":"ContainerStarted","Data":"9b2eebf7b9469dfd1328c1eff016335dd45c27a0246193f8403bde27e0c8e481"} Jan 29 08:43:45 crc kubenswrapper[4895]: I0129 08:43:45.757441 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgjxz" event={"ID":"c14af255-6e29-4bea-978b-8b5bf6285bd8","Type":"ContainerStarted","Data":"9a35eb9ff0cf940df20282c3d9e681bac5874a890a9130b098f32f415ff577e4"} Jan 29 08:43:45 crc kubenswrapper[4895]: I0129 08:43:45.760115 4895 generic.go:334] "Generic (PLEG): container finished" podID="3eb3534d-8971-42ec-8aaf-a970b786e631" containerID="dd4eb688794e2608d9b10ebda8769d4c36e8bbc55e1e3e10fb071808351b99fd" exitCode=0 Jan 29 08:43:45 crc kubenswrapper[4895]: I0129 08:43:45.760175 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtxmb" event={"ID":"3eb3534d-8971-42ec-8aaf-a970b786e631","Type":"ContainerDied","Data":"dd4eb688794e2608d9b10ebda8769d4c36e8bbc55e1e3e10fb071808351b99fd"} Jan 29 08:43:45 crc kubenswrapper[4895]: I0129 08:43:45.763390 4895 generic.go:334] "Generic (PLEG): container finished" podID="d42bddeb-f93a-4603-a38e-1016ca2b3a03" containerID="5a64b142a76b6c87a3c1406fae2a0cb677f914d8bd286e32ff3923312888e44c" exitCode=0 Jan 29 08:43:45 crc kubenswrapper[4895]: I0129 08:43:45.763464 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" event={"ID":"d42bddeb-f93a-4603-a38e-1016ca2b3a03","Type":"ContainerDied","Data":"5a64b142a76b6c87a3c1406fae2a0cb677f914d8bd286e32ff3923312888e44c"} Jan 29 08:43:45 crc kubenswrapper[4895]: I0129 08:43:45.772683 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" event={"ID":"766285e2-63c4-4073-9b24-d5fbf4b26638","Type":"ContainerStarted","Data":"9207aefda22773d043b889d245251c7bd738e5e93d3725c8ce56f6817e2828aa"} Jan 29 08:43:45 crc kubenswrapper[4895]: I0129 08:43:45.773711 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:43:45 crc kubenswrapper[4895]: I0129 08:43:45.877376 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" podStartSLOduration=140.877347994 podStartE2EDuration="2m20.877347994s" podCreationTimestamp="2026-01-29 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:45.870357954 +0000 UTC m=+167.511866110" watchObservedRunningTime="2026-01-29 08:43:45.877347994 +0000 UTC m=+167.518856140" Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.020640 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.020716 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.609365 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:46 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:46 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:46 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.609767 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.775823 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.852789 4895 generic.go:334] "Generic (PLEG): container finished" podID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" containerID="abc17d4f507045f655e3172c35020c303d13c73c2bd99b061e186f41d5fabd99" exitCode=0 Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.858182 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b2r4l" event={"ID":"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124","Type":"ContainerDied","Data":"abc17d4f507045f655e3172c35020c303d13c73c2bd99b061e186f41d5fabd99"} Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.902127 4895 generic.go:334] "Generic (PLEG): container finished" podID="c14af255-6e29-4bea-978b-8b5bf6285bd8" containerID="c36abb045c9c540b16ea23489eb2efc73c37b7f07e125ede7b98dd4bf9e7659c" exitCode=0 Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.902213 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgjxz" event={"ID":"c14af255-6e29-4bea-978b-8b5bf6285bd8","Type":"ContainerDied","Data":"c36abb045c9c540b16ea23489eb2efc73c37b7f07e125ede7b98dd4bf9e7659c"} Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 
08:43:46.917863 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a8ffe82a-3487-49fb-a79b-737dd5effd12","Type":"ContainerDied","Data":"c4faec8eea11769a905eb3a1beb50ca3b25e1e08edf6827d07356a99093af7aa"} Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.917928 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.917951 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4faec8eea11769a905eb3a1beb50ca3b25e1e08edf6827d07356a99093af7aa" Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.971460 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8ffe82a-3487-49fb-a79b-737dd5effd12-kube-api-access\") pod \"a8ffe82a-3487-49fb-a79b-737dd5effd12\" (UID: \"a8ffe82a-3487-49fb-a79b-737dd5effd12\") " Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.971674 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8ffe82a-3487-49fb-a79b-737dd5effd12-kubelet-dir\") pod \"a8ffe82a-3487-49fb-a79b-737dd5effd12\" (UID: \"a8ffe82a-3487-49fb-a79b-737dd5effd12\") " Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.978495 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8ffe82a-3487-49fb-a79b-737dd5effd12-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a8ffe82a-3487-49fb-a79b-737dd5effd12" (UID: "a8ffe82a-3487-49fb-a79b-737dd5effd12"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.978763 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 08:43:46 crc kubenswrapper[4895]: E0129 08:43:46.995330 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8ffe82a-3487-49fb-a79b-737dd5effd12" containerName="pruner" Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.995378 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ffe82a-3487-49fb-a79b-737dd5effd12" containerName="pruner" Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.995548 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8ffe82a-3487-49fb-a79b-737dd5effd12" containerName="pruner" Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.995574 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ffe82a-3487-49fb-a79b-737dd5effd12-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a8ffe82a-3487-49fb-a79b-737dd5effd12" (UID: "a8ffe82a-3487-49fb-a79b-737dd5effd12"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.996057 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 08:43:46 crc kubenswrapper[4895]: I0129 08:43:46.996168 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.001423 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.002215 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.074031 4895 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8ffe82a-3487-49fb-a79b-737dd5effd12-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.074071 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8ffe82a-3487-49fb-a79b-737dd5effd12-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.184818 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e1dc0f7-0913-4a36-8596-c55589d3ffe4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5e1dc0f7-0913-4a36-8596-c55589d3ffe4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.184950 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e1dc0f7-0913-4a36-8596-c55589d3ffe4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5e1dc0f7-0913-4a36-8596-c55589d3ffe4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.287234 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/5e1dc0f7-0913-4a36-8596-c55589d3ffe4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5e1dc0f7-0913-4a36-8596-c55589d3ffe4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.287393 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e1dc0f7-0913-4a36-8596-c55589d3ffe4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5e1dc0f7-0913-4a36-8596-c55589d3ffe4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.287543 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e1dc0f7-0913-4a36-8596-c55589d3ffe4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5e1dc0f7-0913-4a36-8596-c55589d3ffe4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.397145 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e1dc0f7-0913-4a36-8596-c55589d3ffe4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5e1dc0f7-0913-4a36-8596-c55589d3ffe4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.400275 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-852sw" Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.735774 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.763672 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:47 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:47 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:47 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.763806 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.776952 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs\") pod \"network-metrics-daemon-g4585\" (UID: \"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\") " pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.844490 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d167bf78-4ea9-42d8-8ab6-6aaf234e102e-metrics-certs\") pod \"network-metrics-daemon-g4585\" (UID: \"d167bf78-4ea9-42d8-8ab6-6aaf234e102e\") " pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:43:47 crc kubenswrapper[4895]: I0129 08:43:47.846656 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g4585" Jan 29 08:43:48 crc kubenswrapper[4895]: I0129 08:43:48.480235 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" Jan 29 08:43:48 crc kubenswrapper[4895]: I0129 08:43:48.601054 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:48 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:48 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:48 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:48 crc kubenswrapper[4895]: I0129 08:43:48.601122 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:48 crc kubenswrapper[4895]: I0129 08:43:48.644675 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq496\" (UniqueName: \"kubernetes.io/projected/d42bddeb-f93a-4603-a38e-1016ca2b3a03-kube-api-access-hq496\") pod \"d42bddeb-f93a-4603-a38e-1016ca2b3a03\" (UID: \"d42bddeb-f93a-4603-a38e-1016ca2b3a03\") " Jan 29 08:43:48 crc kubenswrapper[4895]: I0129 08:43:48.644771 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d42bddeb-f93a-4603-a38e-1016ca2b3a03-secret-volume\") pod \"d42bddeb-f93a-4603-a38e-1016ca2b3a03\" (UID: \"d42bddeb-f93a-4603-a38e-1016ca2b3a03\") " Jan 29 08:43:48 crc kubenswrapper[4895]: I0129 08:43:48.644854 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d42bddeb-f93a-4603-a38e-1016ca2b3a03-config-volume\") pod \"d42bddeb-f93a-4603-a38e-1016ca2b3a03\" (UID: \"d42bddeb-f93a-4603-a38e-1016ca2b3a03\") " 
Jan 29 08:43:48 crc kubenswrapper[4895]: I0129 08:43:48.646419 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d42bddeb-f93a-4603-a38e-1016ca2b3a03-config-volume" (OuterVolumeSpecName: "config-volume") pod "d42bddeb-f93a-4603-a38e-1016ca2b3a03" (UID: "d42bddeb-f93a-4603-a38e-1016ca2b3a03"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:43:48 crc kubenswrapper[4895]: I0129 08:43:48.749977 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d42bddeb-f93a-4603-a38e-1016ca2b3a03-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:43:48 crc kubenswrapper[4895]: I0129 08:43:48.791706 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d42bddeb-f93a-4603-a38e-1016ca2b3a03-kube-api-access-hq496" (OuterVolumeSpecName: "kube-api-access-hq496") pod "d42bddeb-f93a-4603-a38e-1016ca2b3a03" (UID: "d42bddeb-f93a-4603-a38e-1016ca2b3a03"). InnerVolumeSpecName "kube-api-access-hq496". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:43:48 crc kubenswrapper[4895]: I0129 08:43:48.793494 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42bddeb-f93a-4603-a38e-1016ca2b3a03-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d42bddeb-f93a-4603-a38e-1016ca2b3a03" (UID: "d42bddeb-f93a-4603-a38e-1016ca2b3a03"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:43:48 crc kubenswrapper[4895]: I0129 08:43:48.859043 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d42bddeb-f93a-4603-a38e-1016ca2b3a03-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:43:48 crc kubenswrapper[4895]: I0129 08:43:48.859394 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq496\" (UniqueName: \"kubernetes.io/projected/d42bddeb-f93a-4603-a38e-1016ca2b3a03-kube-api-access-hq496\") on node \"crc\" DevicePath \"\"" Jan 29 08:43:49 crc kubenswrapper[4895]: I0129 08:43:49.043429 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" event={"ID":"d42bddeb-f93a-4603-a38e-1016ca2b3a03","Type":"ContainerDied","Data":"f92fd92a2be0b4c2e85cd016c516f87e92da5ca8cd0a99fa1f241b628fabd679"} Jan 29 08:43:49 crc kubenswrapper[4895]: I0129 08:43:49.043519 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f92fd92a2be0b4c2e85cd016c516f87e92da5ca8cd0a99fa1f241b628fabd679" Jan 29 08:43:49 crc kubenswrapper[4895]: I0129 08:43:49.043521 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg" Jan 29 08:43:49 crc kubenswrapper[4895]: I0129 08:43:49.056871 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 08:43:49 crc kubenswrapper[4895]: I0129 08:43:49.095053 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-g4585"] Jan 29 08:43:49 crc kubenswrapper[4895]: I0129 08:43:49.601546 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:49 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:49 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:49 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:49 crc kubenswrapper[4895]: I0129 08:43:49.602148 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:49 crc kubenswrapper[4895]: I0129 08:43:49.990146 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:43:49 crc kubenswrapper[4895]: I0129 08:43:49.990220 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:43:49 crc 
kubenswrapper[4895]: I0129 08:43:49.991059 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:43:49 crc kubenswrapper[4895]: I0129 08:43:49.991083 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:43:50 crc kubenswrapper[4895]: I0129 08:43:50.061875 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-g4585" event={"ID":"d167bf78-4ea9-42d8-8ab6-6aaf234e102e","Type":"ContainerStarted","Data":"fee900ff556d002193fc8e78f656fb56adcb690c5968665c0b914abb655dde77"} Jan 29 08:43:50 crc kubenswrapper[4895]: I0129 08:43:50.073023 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5e1dc0f7-0913-4a36-8596-c55589d3ffe4","Type":"ContainerStarted","Data":"0b613616ae28b0507043f61f3da029dd17da8faeff0e8aa3d789bb6552bedd07"} Jan 29 08:43:50 crc kubenswrapper[4895]: I0129 08:43:50.601948 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:50 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:50 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:50 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:50 crc kubenswrapper[4895]: I0129 08:43:50.602500 4895 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:51 crc kubenswrapper[4895]: I0129 08:43:51.182735 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5e1dc0f7-0913-4a36-8596-c55589d3ffe4","Type":"ContainerStarted","Data":"0b6acb98162cc751b117c3c46fc9928f794b0999ba71a22d8dfce2e88e469e42"} Jan 29 08:43:51 crc kubenswrapper[4895]: I0129 08:43:51.227536 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=5.22748476 podStartE2EDuration="5.22748476s" podCreationTimestamp="2026-01-29 08:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:51.219225505 +0000 UTC m=+172.860733671" watchObservedRunningTime="2026-01-29 08:43:51.22748476 +0000 UTC m=+172.868992936" Jan 29 08:43:51 crc kubenswrapper[4895]: I0129 08:43:51.267841 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-g4585" event={"ID":"d167bf78-4ea9-42d8-8ab6-6aaf234e102e","Type":"ContainerStarted","Data":"c0d7cb32abb774109ec6100a10fe009216ac249f0152b69fec499e9be75a4321"} Jan 29 08:43:51 crc kubenswrapper[4895]: I0129 08:43:51.284485 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-p8bg5_2ee8a736-336a-4b9a-a5d3-df1d4da6da62/cluster-samples-operator/0.log" Jan 29 08:43:51 crc kubenswrapper[4895]: I0129 08:43:51.284569 4895 generic.go:334] "Generic (PLEG): container finished" podID="2ee8a736-336a-4b9a-a5d3-df1d4da6da62" containerID="0095d557b6c0944628757dba1df78d6cebc2631de6732cb30c99d33f0f8cf55f" exitCode=2 Jan 29 08:43:51 crc kubenswrapper[4895]: I0129 
08:43:51.284620 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5" event={"ID":"2ee8a736-336a-4b9a-a5d3-df1d4da6da62","Type":"ContainerDied","Data":"0095d557b6c0944628757dba1df78d6cebc2631de6732cb30c99d33f0f8cf55f"} Jan 29 08:43:51 crc kubenswrapper[4895]: I0129 08:43:51.285358 4895 scope.go:117] "RemoveContainer" containerID="0095d557b6c0944628757dba1df78d6cebc2631de6732cb30c99d33f0f8cf55f" Jan 29 08:43:51 crc kubenswrapper[4895]: I0129 08:43:51.601947 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:51 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:51 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:51 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:51 crc kubenswrapper[4895]: I0129 08:43:51.602411 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:52 crc kubenswrapper[4895]: I0129 08:43:52.269572 4895 patch_prober.go:28] interesting pod/console-f9d7485db-z5sff container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 29 08:43:52 crc kubenswrapper[4895]: I0129 08:43:52.269643 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-z5sff" podUID="ea9f8a45-3fdc-4780-a008-e0f77c99dffc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" 
Jan 29 08:43:52 crc kubenswrapper[4895]: I0129 08:43:52.310690 4895 generic.go:334] "Generic (PLEG): container finished" podID="5e1dc0f7-0913-4a36-8596-c55589d3ffe4" containerID="0b6acb98162cc751b117c3c46fc9928f794b0999ba71a22d8dfce2e88e469e42" exitCode=0 Jan 29 08:43:52 crc kubenswrapper[4895]: I0129 08:43:52.310800 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5e1dc0f7-0913-4a36-8596-c55589d3ffe4","Type":"ContainerDied","Data":"0b6acb98162cc751b117c3c46fc9928f794b0999ba71a22d8dfce2e88e469e42"} Jan 29 08:43:52 crc kubenswrapper[4895]: I0129 08:43:52.331609 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-g4585" event={"ID":"d167bf78-4ea9-42d8-8ab6-6aaf234e102e","Type":"ContainerStarted","Data":"1a513937aec33610652965e72f8e0bd21e419cb33e8000a05c9462c80c7aca1d"} Jan 29 08:43:52 crc kubenswrapper[4895]: I0129 08:43:52.393183 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-p8bg5_2ee8a736-336a-4b9a-a5d3-df1d4da6da62/cluster-samples-operator/0.log" Jan 29 08:43:52 crc kubenswrapper[4895]: I0129 08:43:52.393306 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8bg5" event={"ID":"2ee8a736-336a-4b9a-a5d3-df1d4da6da62","Type":"ContainerStarted","Data":"3dca4cb75d0652677bbc29ac2646cce27f7a7643d96b32f161a4e12c2f87e55c"} Jan 29 08:43:52 crc kubenswrapper[4895]: I0129 08:43:52.465413 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-g4585" podStartSLOduration=148.465374583 podStartE2EDuration="2m28.465374583s" podCreationTimestamp="2026-01-29 08:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:43:52.437796414 +0000 
UTC m=+174.079304570" watchObservedRunningTime="2026-01-29 08:43:52.465374583 +0000 UTC m=+174.106882729" Jan 29 08:43:52 crc kubenswrapper[4895]: I0129 08:43:52.605740 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:52 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:52 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:52 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:52 crc kubenswrapper[4895]: I0129 08:43:52.605817 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:52 crc kubenswrapper[4895]: I0129 08:43:52.623841 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:43:53 crc kubenswrapper[4895]: I0129 08:43:53.603520 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:53 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:53 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:53 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:53 crc kubenswrapper[4895]: I0129 08:43:53.603869 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:54 crc 
kubenswrapper[4895]: I0129 08:43:54.146853 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 08:43:54 crc kubenswrapper[4895]: I0129 08:43:54.270993 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e1dc0f7-0913-4a36-8596-c55589d3ffe4-kube-api-access\") pod \"5e1dc0f7-0913-4a36-8596-c55589d3ffe4\" (UID: \"5e1dc0f7-0913-4a36-8596-c55589d3ffe4\") " Jan 29 08:43:54 crc kubenswrapper[4895]: I0129 08:43:54.271093 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e1dc0f7-0913-4a36-8596-c55589d3ffe4-kubelet-dir\") pod \"5e1dc0f7-0913-4a36-8596-c55589d3ffe4\" (UID: \"5e1dc0f7-0913-4a36-8596-c55589d3ffe4\") " Jan 29 08:43:54 crc kubenswrapper[4895]: I0129 08:43:54.271437 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e1dc0f7-0913-4a36-8596-c55589d3ffe4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5e1dc0f7-0913-4a36-8596-c55589d3ffe4" (UID: "5e1dc0f7-0913-4a36-8596-c55589d3ffe4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:43:54 crc kubenswrapper[4895]: I0129 08:43:54.316509 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e1dc0f7-0913-4a36-8596-c55589d3ffe4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5e1dc0f7-0913-4a36-8596-c55589d3ffe4" (UID: "5e1dc0f7-0913-4a36-8596-c55589d3ffe4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:43:54 crc kubenswrapper[4895]: I0129 08:43:54.372335 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e1dc0f7-0913-4a36-8596-c55589d3ffe4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 08:43:54 crc kubenswrapper[4895]: I0129 08:43:54.372373 4895 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e1dc0f7-0913-4a36-8596-c55589d3ffe4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:43:54 crc kubenswrapper[4895]: I0129 08:43:54.454140 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5e1dc0f7-0913-4a36-8596-c55589d3ffe4","Type":"ContainerDied","Data":"0b613616ae28b0507043f61f3da029dd17da8faeff0e8aa3d789bb6552bedd07"} Jan 29 08:43:54 crc kubenswrapper[4895]: I0129 08:43:54.454256 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b613616ae28b0507043f61f3da029dd17da8faeff0e8aa3d789bb6552bedd07" Jan 29 08:43:54 crc kubenswrapper[4895]: I0129 08:43:54.454348 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 08:43:54 crc kubenswrapper[4895]: I0129 08:43:54.604968 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:54 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 08:43:54 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:54 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:54 crc kubenswrapper[4895]: I0129 08:43:54.605056 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:55 crc kubenswrapper[4895]: I0129 08:43:55.608980 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 08:43:55 crc kubenswrapper[4895]: [+]has-synced ok Jan 29 08:43:55 crc kubenswrapper[4895]: [+]process-running ok Jan 29 08:43:55 crc kubenswrapper[4895]: healthz check failed Jan 29 08:43:55 crc kubenswrapper[4895]: I0129 08:43:55.609392 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:43:56 crc kubenswrapper[4895]: I0129 08:43:56.624383 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:56 crc kubenswrapper[4895]: I0129 08:43:56.671289 4895 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-f64b6" Jan 29 08:43:58 crc kubenswrapper[4895]: I0129 08:43:58.499743 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 08:43:59 crc kubenswrapper[4895]: I0129 08:43:59.995165 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:43:59 crc kubenswrapper[4895]: I0129 08:43:59.995620 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:43:59 crc kubenswrapper[4895]: I0129 08:43:59.995736 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-w8vqq" Jan 29 08:43:59 crc kubenswrapper[4895]: I0129 08:43:59.997076 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"2353ca9c1453cfa31debb0617fc987d1aed5ac868566b262915f1a057f75c03c"} pod="openshift-console/downloads-7954f5f757-w8vqq" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 29 08:43:59 crc kubenswrapper[4895]: I0129 08:43:59.998356 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" containerID="cri-o://2353ca9c1453cfa31debb0617fc987d1aed5ac868566b262915f1a057f75c03c" gracePeriod=2 Jan 29 08:43:59 crc 
kubenswrapper[4895]: I0129 08:43:59.998251 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:43:59 crc kubenswrapper[4895]: I0129 08:43:59.998693 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:43:59 crc kubenswrapper[4895]: I0129 08:43:59.999203 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:43:59 crc kubenswrapper[4895]: I0129 08:43:59.999264 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:44:00 crc kubenswrapper[4895]: I0129 08:44:00.701524 4895 generic.go:334] "Generic (PLEG): container finished" podID="0e8ec468-a940-452a-975b-60a761b9f44f" containerID="2353ca9c1453cfa31debb0617fc987d1aed5ac868566b262915f1a057f75c03c" exitCode=0 Jan 29 08:44:00 crc kubenswrapper[4895]: I0129 08:44:00.701591 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-w8vqq" event={"ID":"0e8ec468-a940-452a-975b-60a761b9f44f","Type":"ContainerDied","Data":"2353ca9c1453cfa31debb0617fc987d1aed5ac868566b262915f1a057f75c03c"} Jan 29 08:44:02 crc kubenswrapper[4895]: I0129 
08:44:02.276817 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:44:02 crc kubenswrapper[4895]: I0129 08:44:02.282665 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:44:03 crc kubenswrapper[4895]: I0129 08:44:03.890031 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:44:09 crc kubenswrapper[4895]: I0129 08:44:09.991237 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:44:09 crc kubenswrapper[4895]: I0129 08:44:09.991832 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:44:12 crc kubenswrapper[4895]: I0129 08:44:12.357598 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c82ws" Jan 29 08:44:16 crc kubenswrapper[4895]: I0129 08:44:16.021227 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:44:16 crc kubenswrapper[4895]: I0129 08:44:16.021550 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" 
podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:44:20 crc kubenswrapper[4895]: I0129 08:44:20.087276 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:44:20 crc kubenswrapper[4895]: I0129 08:44:20.087899 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:44:21 crc kubenswrapper[4895]: I0129 08:44:21.571899 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 08:44:21 crc kubenswrapper[4895]: E0129 08:44:21.572771 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d42bddeb-f93a-4603-a38e-1016ca2b3a03" containerName="collect-profiles" Jan 29 08:44:21 crc kubenswrapper[4895]: I0129 08:44:21.572788 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d42bddeb-f93a-4603-a38e-1016ca2b3a03" containerName="collect-profiles" Jan 29 08:44:21 crc kubenswrapper[4895]: E0129 08:44:21.572808 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e1dc0f7-0913-4a36-8596-c55589d3ffe4" containerName="pruner" Jan 29 08:44:21 crc kubenswrapper[4895]: I0129 08:44:21.572814 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e1dc0f7-0913-4a36-8596-c55589d3ffe4" containerName="pruner" Jan 29 08:44:21 crc kubenswrapper[4895]: I0129 08:44:21.572929 4895 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d42bddeb-f93a-4603-a38e-1016ca2b3a03" containerName="collect-profiles" Jan 29 08:44:21 crc kubenswrapper[4895]: I0129 08:44:21.572948 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e1dc0f7-0913-4a36-8596-c55589d3ffe4" containerName="pruner" Jan 29 08:44:21 crc kubenswrapper[4895]: I0129 08:44:21.573317 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:44:21 crc kubenswrapper[4895]: I0129 08:44:21.579239 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 08:44:21 crc kubenswrapper[4895]: I0129 08:44:21.579248 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 08:44:21 crc kubenswrapper[4895]: I0129 08:44:21.585990 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 08:44:21 crc kubenswrapper[4895]: I0129 08:44:21.631037 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1385efb-894f-4ef0-b3b7-ded0d49e9e2a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a1385efb-894f-4ef0-b3b7-ded0d49e9e2a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:44:21 crc kubenswrapper[4895]: I0129 08:44:21.631102 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1385efb-894f-4ef0-b3b7-ded0d49e9e2a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a1385efb-894f-4ef0-b3b7-ded0d49e9e2a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:44:21 crc kubenswrapper[4895]: I0129 08:44:21.734707 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/a1385efb-894f-4ef0-b3b7-ded0d49e9e2a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a1385efb-894f-4ef0-b3b7-ded0d49e9e2a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:44:21 crc kubenswrapper[4895]: I0129 08:44:21.734794 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1385efb-894f-4ef0-b3b7-ded0d49e9e2a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a1385efb-894f-4ef0-b3b7-ded0d49e9e2a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:44:21 crc kubenswrapper[4895]: I0129 08:44:21.734940 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1385efb-894f-4ef0-b3b7-ded0d49e9e2a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a1385efb-894f-4ef0-b3b7-ded0d49e9e2a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:44:21 crc kubenswrapper[4895]: I0129 08:44:21.769058 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1385efb-894f-4ef0-b3b7-ded0d49e9e2a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a1385efb-894f-4ef0-b3b7-ded0d49e9e2a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:44:21 crc kubenswrapper[4895]: I0129 08:44:21.909498 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:44:25 crc kubenswrapper[4895]: I0129 08:44:25.768650 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 08:44:25 crc kubenswrapper[4895]: I0129 08:44:25.771010 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:44:25 crc kubenswrapper[4895]: I0129 08:44:25.778289 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 08:44:25 crc kubenswrapper[4895]: I0129 08:44:25.799359 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e4be033f-c03a-4c59-897e-3b03190e597a-var-lock\") pod \"installer-9-crc\" (UID: \"e4be033f-c03a-4c59-897e-3b03190e597a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:44:25 crc kubenswrapper[4895]: I0129 08:44:25.800284 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4be033f-c03a-4c59-897e-3b03190e597a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e4be033f-c03a-4c59-897e-3b03190e597a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:44:25 crc kubenswrapper[4895]: I0129 08:44:25.800427 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4be033f-c03a-4c59-897e-3b03190e597a-kube-api-access\") pod \"installer-9-crc\" (UID: \"e4be033f-c03a-4c59-897e-3b03190e597a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:44:25 crc kubenswrapper[4895]: I0129 08:44:25.902123 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4be033f-c03a-4c59-897e-3b03190e597a-kube-api-access\") pod \"installer-9-crc\" (UID: \"e4be033f-c03a-4c59-897e-3b03190e597a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:44:25 crc kubenswrapper[4895]: I0129 08:44:25.902231 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/e4be033f-c03a-4c59-897e-3b03190e597a-var-lock\") pod \"installer-9-crc\" (UID: \"e4be033f-c03a-4c59-897e-3b03190e597a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:44:25 crc kubenswrapper[4895]: I0129 08:44:25.902281 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4be033f-c03a-4c59-897e-3b03190e597a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e4be033f-c03a-4c59-897e-3b03190e597a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:44:25 crc kubenswrapper[4895]: I0129 08:44:25.902369 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e4be033f-c03a-4c59-897e-3b03190e597a-var-lock\") pod \"installer-9-crc\" (UID: \"e4be033f-c03a-4c59-897e-3b03190e597a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:44:25 crc kubenswrapper[4895]: I0129 08:44:25.902385 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4be033f-c03a-4c59-897e-3b03190e597a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e4be033f-c03a-4c59-897e-3b03190e597a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:44:25 crc kubenswrapper[4895]: I0129 08:44:25.944452 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4be033f-c03a-4c59-897e-3b03190e597a-kube-api-access\") pod \"installer-9-crc\" (UID: \"e4be033f-c03a-4c59-897e-3b03190e597a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:44:26 crc kubenswrapper[4895]: I0129 08:44:26.121148 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:44:27 crc kubenswrapper[4895]: E0129 08:44:27.998971 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 08:44:28 crc kubenswrapper[4895]: E0129 08:44:27.999604 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkzlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResiz
ePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xgjxz_openshift-marketplace(c14af255-6e29-4bea-978b-8b5bf6285bd8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 08:44:28 crc kubenswrapper[4895]: E0129 08:44:28.000874 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xgjxz" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" Jan 29 08:44:29 crc kubenswrapper[4895]: E0129 08:44:29.294840 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xgjxz" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" Jan 29 08:44:29 crc kubenswrapper[4895]: E0129 08:44:29.370910 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 08:44:29 crc kubenswrapper[4895]: E0129 08:44:29.371402 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrgzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-85kbq_openshift-marketplace(bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 08:44:29 crc kubenswrapper[4895]: E0129 08:44:29.372837 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-85kbq" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" Jan 29 08:44:29 crc 
kubenswrapper[4895]: E0129 08:44:29.398879 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 08:44:29 crc kubenswrapper[4895]: E0129 08:44:29.399167 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j8tnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-b2r4l_openshift-marketplace(a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 08:44:29 crc kubenswrapper[4895]: E0129 08:44:29.402430 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-b2r4l" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" Jan 29 08:44:29 crc kubenswrapper[4895]: I0129 08:44:29.991593 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:44:29 crc kubenswrapper[4895]: I0129 08:44:29.991683 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:44:32 crc kubenswrapper[4895]: E0129 08:44:32.053047 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-85kbq" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" Jan 29 08:44:32 crc kubenswrapper[4895]: E0129 08:44:32.053262 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-b2r4l" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" Jan 29 08:44:32 crc kubenswrapper[4895]: E0129 08:44:32.375371 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 08:44:32 crc kubenswrapper[4895]: E0129 08:44:32.375612 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m99kq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fal
lbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l25mt_openshift-marketplace(886dfa02-5b87-4bdf-9bf5-fcd914ff2afb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 08:44:32 crc kubenswrapper[4895]: E0129 08:44:32.377668 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-l25mt" podUID="886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" Jan 29 08:44:33 crc kubenswrapper[4895]: E0129 08:44:33.716895 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l25mt" podUID="886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" Jan 29 08:44:33 crc kubenswrapper[4895]: E0129 08:44:33.801271 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 08:44:33 crc kubenswrapper[4895]: E0129 08:44:33.801461 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-khfjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wtxmb_openshift-marketplace(3eb3534d-8971-42ec-8aaf-a970b786e631): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 08:44:33 crc kubenswrapper[4895]: E0129 08:44:33.802774 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wtxmb" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" Jan 29 08:44:33 crc 
kubenswrapper[4895]: E0129 08:44:33.819320 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 08:44:33 crc kubenswrapper[4895]: E0129 08:44:33.819563 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztwrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-nfrjk_openshift-marketplace(0349d46c-bf39-4ba0-99be-22445866386b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 08:44:33 crc kubenswrapper[4895]: E0129 08:44:33.821410 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-nfrjk" podUID="0349d46c-bf39-4ba0-99be-22445866386b" Jan 29 08:44:33 crc kubenswrapper[4895]: E0129 08:44:33.857728 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 08:44:33 crc kubenswrapper[4895]: E0129 08:44:33.858283 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rtsfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-c8p6f_openshift-marketplace(a0b31a89-9993-4996-8b19-961efcb757ed): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 08:44:33 crc kubenswrapper[4895]: E0129 08:44:33.859495 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-c8p6f" podUID="a0b31a89-9993-4996-8b19-961efcb757ed" Jan 29 08:44:33 crc 
kubenswrapper[4895]: E0129 08:44:33.895076 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 08:44:33 crc kubenswrapper[4895]: E0129 08:44:33.895323 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bkl55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-2d5f8_openshift-marketplace(b8e96926-7c32-4a64-b37d-342a66d925ea): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 08:44:33 crc kubenswrapper[4895]: E0129 08:44:33.896721 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2d5f8" podUID="b8e96926-7c32-4a64-b37d-342a66d925ea" Jan 29 08:44:34 crc kubenswrapper[4895]: I0129 08:44:34.089762 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-w8vqq" event={"ID":"0e8ec468-a940-452a-975b-60a761b9f44f","Type":"ContainerStarted","Data":"ac992f02c64bfa3917796c8fd95b11a2168e67ad7b5a1b8de304bc1958e0ff68"} Jan 29 08:44:34 crc kubenswrapper[4895]: I0129 08:44:34.090663 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:44:34 crc kubenswrapper[4895]: I0129 08:44:34.090741 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:44:34 crc kubenswrapper[4895]: E0129 08:44:34.092386 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-c8p6f" podUID="a0b31a89-9993-4996-8b19-961efcb757ed" Jan 29 08:44:34 crc kubenswrapper[4895]: E0129 08:44:34.092818 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-nfrjk" podUID="0349d46c-bf39-4ba0-99be-22445866386b" Jan 29 08:44:34 crc kubenswrapper[4895]: E0129 08:44:34.093292 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wtxmb" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" Jan 29 08:44:34 crc kubenswrapper[4895]: E0129 08:44:34.093727 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2d5f8" podUID="b8e96926-7c32-4a64-b37d-342a66d925ea" Jan 29 08:44:34 crc kubenswrapper[4895]: I0129 08:44:34.227289 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 08:44:34 crc kubenswrapper[4895]: I0129 08:44:34.275000 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 08:44:35 crc kubenswrapper[4895]: I0129 08:44:35.097233 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a1385efb-894f-4ef0-b3b7-ded0d49e9e2a","Type":"ContainerStarted","Data":"d6ead5a9da047a84c7355e7eef840a930d04d0eb3310b1532a21eee6425148ac"} Jan 29 08:44:35 crc kubenswrapper[4895]: I0129 
08:44:35.098747 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e4be033f-c03a-4c59-897e-3b03190e597a","Type":"ContainerStarted","Data":"457594cb91617af3504a759418b09f7182e95630585fde49c90c02db923e26f6"} Jan 29 08:44:35 crc kubenswrapper[4895]: I0129 08:44:35.099020 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-w8vqq" Jan 29 08:44:35 crc kubenswrapper[4895]: I0129 08:44:35.099624 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:44:35 crc kubenswrapper[4895]: I0129 08:44:35.099688 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:44:36 crc kubenswrapper[4895]: I0129 08:44:36.109359 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e4be033f-c03a-4c59-897e-3b03190e597a","Type":"ContainerStarted","Data":"28bfb8db8ac08f06b2640d6de166d2809e3c8ea3fac597f8ba213163531d5d62"} Jan 29 08:44:36 crc kubenswrapper[4895]: I0129 08:44:36.113884 4895 generic.go:334] "Generic (PLEG): container finished" podID="a1385efb-894f-4ef0-b3b7-ded0d49e9e2a" containerID="86caba9fdaaf8b87bb4597df72c4f859b0f7caf2af969b1997194a6c1d59977f" exitCode=0 Jan 29 08:44:36 crc kubenswrapper[4895]: I0129 08:44:36.113960 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"a1385efb-894f-4ef0-b3b7-ded0d49e9e2a","Type":"ContainerDied","Data":"86caba9fdaaf8b87bb4597df72c4f859b0f7caf2af969b1997194a6c1d59977f"} Jan 29 08:44:36 crc kubenswrapper[4895]: I0129 08:44:36.114752 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:44:36 crc kubenswrapper[4895]: I0129 08:44:36.114836 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:44:36 crc kubenswrapper[4895]: I0129 08:44:36.126228 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=11.126204056 podStartE2EDuration="11.126204056s" podCreationTimestamp="2026-01-29 08:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:44:36.12514749 +0000 UTC m=+217.766655636" watchObservedRunningTime="2026-01-29 08:44:36.126204056 +0000 UTC m=+217.767712202" Jan 29 08:44:37 crc kubenswrapper[4895]: I0129 08:44:37.391116 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:44:37 crc kubenswrapper[4895]: I0129 08:44:37.537308 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1385efb-894f-4ef0-b3b7-ded0d49e9e2a-kube-api-access\") pod \"a1385efb-894f-4ef0-b3b7-ded0d49e9e2a\" (UID: \"a1385efb-894f-4ef0-b3b7-ded0d49e9e2a\") " Jan 29 08:44:37 crc kubenswrapper[4895]: I0129 08:44:37.537423 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1385efb-894f-4ef0-b3b7-ded0d49e9e2a-kubelet-dir\") pod \"a1385efb-894f-4ef0-b3b7-ded0d49e9e2a\" (UID: \"a1385efb-894f-4ef0-b3b7-ded0d49e9e2a\") " Jan 29 08:44:37 crc kubenswrapper[4895]: I0129 08:44:37.537576 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1385efb-894f-4ef0-b3b7-ded0d49e9e2a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a1385efb-894f-4ef0-b3b7-ded0d49e9e2a" (UID: "a1385efb-894f-4ef0-b3b7-ded0d49e9e2a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:44:37 crc kubenswrapper[4895]: I0129 08:44:37.537768 4895 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1385efb-894f-4ef0-b3b7-ded0d49e9e2a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:37 crc kubenswrapper[4895]: I0129 08:44:37.544580 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1385efb-894f-4ef0-b3b7-ded0d49e9e2a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a1385efb-894f-4ef0-b3b7-ded0d49e9e2a" (UID: "a1385efb-894f-4ef0-b3b7-ded0d49e9e2a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:44:37 crc kubenswrapper[4895]: I0129 08:44:37.638829 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1385efb-894f-4ef0-b3b7-ded0d49e9e2a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:38 crc kubenswrapper[4895]: I0129 08:44:38.130677 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a1385efb-894f-4ef0-b3b7-ded0d49e9e2a","Type":"ContainerDied","Data":"d6ead5a9da047a84c7355e7eef840a930d04d0eb3310b1532a21eee6425148ac"} Jan 29 08:44:38 crc kubenswrapper[4895]: I0129 08:44:38.130736 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6ead5a9da047a84c7355e7eef840a930d04d0eb3310b1532a21eee6425148ac" Jan 29 08:44:38 crc kubenswrapper[4895]: I0129 08:44:38.131206 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 08:44:39 crc kubenswrapper[4895]: I0129 08:44:39.991030 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:44:39 crc kubenswrapper[4895]: I0129 08:44:39.991420 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:44:39 crc kubenswrapper[4895]: I0129 08:44:39.991034 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get 
\"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:44:39 crc kubenswrapper[4895]: I0129 08:44:39.991544 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:44:46 crc kubenswrapper[4895]: I0129 08:44:46.020849 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:44:46 crc kubenswrapper[4895]: I0129 08:44:46.021782 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:44:46 crc kubenswrapper[4895]: I0129 08:44:46.021845 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:44:46 crc kubenswrapper[4895]: I0129 08:44:46.022716 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3"} pod="openshift-machine-config-operator/machine-config-daemon-z82hk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:44:46 crc kubenswrapper[4895]: I0129 08:44:46.022780 4895 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" containerID="cri-o://fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3" gracePeriod=600 Jan 29 08:44:47 crc kubenswrapper[4895]: I0129 08:44:47.205642 4895 generic.go:334] "Generic (PLEG): container finished" podID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerID="fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3" exitCode=0 Jan 29 08:44:47 crc kubenswrapper[4895]: I0129 08:44:47.205731 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerDied","Data":"fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3"} Jan 29 08:44:49 crc kubenswrapper[4895]: I0129 08:44:49.990288 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:44:49 crc kubenswrapper[4895]: I0129 08:44:49.990734 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:44:49 crc kubenswrapper[4895]: I0129 08:44:49.990432 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-w8vqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 08:44:49 crc kubenswrapper[4895]: I0129 08:44:49.991140 4895 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-w8vqq" podUID="0e8ec468-a940-452a-975b-60a761b9f44f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 08:44:51 crc kubenswrapper[4895]: I0129 08:44:51.298185 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgjxz" event={"ID":"c14af255-6e29-4bea-978b-8b5bf6285bd8","Type":"ContainerStarted","Data":"a62748f3db06ade103a89e275c37447e19f3e8562aa817d784660a83698afb72"} Jan 29 08:44:51 crc kubenswrapper[4895]: I0129 08:44:51.300668 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerStarted","Data":"f9f292e90b87004fc0704882047c33934c7850e8be2171a5825e64c3cef92531"} Jan 29 08:44:54 crc kubenswrapper[4895]: I0129 08:44:54.333617 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2d5f8" event={"ID":"b8e96926-7c32-4a64-b37d-342a66d925ea","Type":"ContainerStarted","Data":"ef42c9becf063353673a42eaaeee135b5b14ea6726043a5c03d77c11e7f2cacf"} Jan 29 08:44:54 crc kubenswrapper[4895]: I0129 08:44:54.359834 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b2r4l" event={"ID":"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124","Type":"ContainerStarted","Data":"a99acb26dd7a671163f81fba56de147fac20be8aaf15694217cf082ea9dc7cf2"} Jan 29 08:44:54 crc kubenswrapper[4895]: I0129 08:44:54.383037 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l25mt" event={"ID":"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb","Type":"ContainerStarted","Data":"89a12fb719283e9a6442676644fc65f122f2b2af54832d067ae6ccd2f9abe056"} Jan 29 08:44:54 crc kubenswrapper[4895]: I0129 08:44:54.384848 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-85kbq" event={"ID":"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8","Type":"ContainerStarted","Data":"eeda50a9c75e1244d10cf88d1300fc37b335b0cab8d1ceeceae25b8ed91758ca"} Jan 29 08:44:54 crc kubenswrapper[4895]: I0129 08:44:54.386837 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtxmb" event={"ID":"3eb3534d-8971-42ec-8aaf-a970b786e631","Type":"ContainerStarted","Data":"30795ed371b20b52d2b3b58bd617c01027e48def6a203658f8f7ebff2bc2b498"} Jan 29 08:44:54 crc kubenswrapper[4895]: I0129 08:44:54.393004 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8p6f" event={"ID":"a0b31a89-9993-4996-8b19-961efcb757ed","Type":"ContainerStarted","Data":"788c15e653a5be425c1f42b480c735de35ec163fe9c0b3fee094f0972d351e44"} Jan 29 08:44:54 crc kubenswrapper[4895]: I0129 08:44:54.399236 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nfrjk" event={"ID":"0349d46c-bf39-4ba0-99be-22445866386b","Type":"ContainerStarted","Data":"cc9d868e940c05ebacc35e99eace45eb04a1a5f75d4d2b59d9bd84a3c1111722"} Jan 29 08:44:55 crc kubenswrapper[4895]: I0129 08:44:55.406365 4895 generic.go:334] "Generic (PLEG): container finished" podID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" containerID="eeda50a9c75e1244d10cf88d1300fc37b335b0cab8d1ceeceae25b8ed91758ca" exitCode=0 Jan 29 08:44:55 crc kubenswrapper[4895]: I0129 08:44:55.406460 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85kbq" event={"ID":"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8","Type":"ContainerDied","Data":"eeda50a9c75e1244d10cf88d1300fc37b335b0cab8d1ceeceae25b8ed91758ca"} Jan 29 08:44:55 crc kubenswrapper[4895]: I0129 08:44:55.409674 4895 generic.go:334] "Generic (PLEG): container finished" podID="3eb3534d-8971-42ec-8aaf-a970b786e631" containerID="30795ed371b20b52d2b3b58bd617c01027e48def6a203658f8f7ebff2bc2b498" 
exitCode=0 Jan 29 08:44:55 crc kubenswrapper[4895]: I0129 08:44:55.409748 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtxmb" event={"ID":"3eb3534d-8971-42ec-8aaf-a970b786e631","Type":"ContainerDied","Data":"30795ed371b20b52d2b3b58bd617c01027e48def6a203658f8f7ebff2bc2b498"} Jan 29 08:44:55 crc kubenswrapper[4895]: I0129 08:44:55.412894 4895 generic.go:334] "Generic (PLEG): container finished" podID="a0b31a89-9993-4996-8b19-961efcb757ed" containerID="788c15e653a5be425c1f42b480c735de35ec163fe9c0b3fee094f0972d351e44" exitCode=0 Jan 29 08:44:55 crc kubenswrapper[4895]: I0129 08:44:55.412969 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8p6f" event={"ID":"a0b31a89-9993-4996-8b19-961efcb757ed","Type":"ContainerDied","Data":"788c15e653a5be425c1f42b480c735de35ec163fe9c0b3fee094f0972d351e44"} Jan 29 08:44:56 crc kubenswrapper[4895]: I0129 08:44:56.422161 4895 generic.go:334] "Generic (PLEG): container finished" podID="0349d46c-bf39-4ba0-99be-22445866386b" containerID="cc9d868e940c05ebacc35e99eace45eb04a1a5f75d4d2b59d9bd84a3c1111722" exitCode=0 Jan 29 08:44:56 crc kubenswrapper[4895]: I0129 08:44:56.422267 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nfrjk" event={"ID":"0349d46c-bf39-4ba0-99be-22445866386b","Type":"ContainerDied","Data":"cc9d868e940c05ebacc35e99eace45eb04a1a5f75d4d2b59d9bd84a3c1111722"} Jan 29 08:44:56 crc kubenswrapper[4895]: I0129 08:44:56.429615 4895 generic.go:334] "Generic (PLEG): container finished" podID="b8e96926-7c32-4a64-b37d-342a66d925ea" containerID="ef42c9becf063353673a42eaaeee135b5b14ea6726043a5c03d77c11e7f2cacf" exitCode=0 Jan 29 08:44:56 crc kubenswrapper[4895]: I0129 08:44:56.429664 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2d5f8" 
event={"ID":"b8e96926-7c32-4a64-b37d-342a66d925ea","Type":"ContainerDied","Data":"ef42c9becf063353673a42eaaeee135b5b14ea6726043a5c03d77c11e7f2cacf"} Jan 29 08:44:56 crc kubenswrapper[4895]: I0129 08:44:56.438596 4895 generic.go:334] "Generic (PLEG): container finished" podID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" containerID="a99acb26dd7a671163f81fba56de147fac20be8aaf15694217cf082ea9dc7cf2" exitCode=0 Jan 29 08:44:56 crc kubenswrapper[4895]: I0129 08:44:56.438703 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b2r4l" event={"ID":"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124","Type":"ContainerDied","Data":"a99acb26dd7a671163f81fba56de147fac20be8aaf15694217cf082ea9dc7cf2"} Jan 29 08:44:56 crc kubenswrapper[4895]: I0129 08:44:56.442021 4895 generic.go:334] "Generic (PLEG): container finished" podID="886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" containerID="89a12fb719283e9a6442676644fc65f122f2b2af54832d067ae6ccd2f9abe056" exitCode=0 Jan 29 08:44:56 crc kubenswrapper[4895]: I0129 08:44:56.442116 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l25mt" event={"ID":"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb","Type":"ContainerDied","Data":"89a12fb719283e9a6442676644fc65f122f2b2af54832d067ae6ccd2f9abe056"} Jan 29 08:44:56 crc kubenswrapper[4895]: I0129 08:44:56.446357 4895 generic.go:334] "Generic (PLEG): container finished" podID="c14af255-6e29-4bea-978b-8b5bf6285bd8" containerID="a62748f3db06ade103a89e275c37447e19f3e8562aa817d784660a83698afb72" exitCode=0 Jan 29 08:44:56 crc kubenswrapper[4895]: I0129 08:44:56.446425 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgjxz" event={"ID":"c14af255-6e29-4bea-978b-8b5bf6285bd8","Type":"ContainerDied","Data":"a62748f3db06ade103a89e275c37447e19f3e8562aa817d784660a83698afb72"} Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.014774 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-console/downloads-7954f5f757-w8vqq" Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.174511 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp"] Jan 29 08:45:00 crc kubenswrapper[4895]: E0129 08:45:00.174932 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1385efb-894f-4ef0-b3b7-ded0d49e9e2a" containerName="pruner" Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.174948 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1385efb-894f-4ef0-b3b7-ded0d49e9e2a" containerName="pruner" Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.175073 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1385efb-894f-4ef0-b3b7-ded0d49e9e2a" containerName="pruner" Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.175725 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp" Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.181262 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.182078 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.185203 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp"] Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.323409 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-config-volume\") pod \"collect-profiles-29494605-qhdtp\" (UID: 
\"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp" Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.323486 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d455t\" (UniqueName: \"kubernetes.io/projected/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-kube-api-access-d455t\") pod \"collect-profiles-29494605-qhdtp\" (UID: \"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp" Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.323623 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-secret-volume\") pod \"collect-profiles-29494605-qhdtp\" (UID: \"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp" Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.425344 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-secret-volume\") pod \"collect-profiles-29494605-qhdtp\" (UID: \"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp" Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.426685 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-config-volume\") pod \"collect-profiles-29494605-qhdtp\" (UID: \"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp" Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.426716 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-d455t\" (UniqueName: \"kubernetes.io/projected/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-kube-api-access-d455t\") pod \"collect-profiles-29494605-qhdtp\" (UID: \"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp" Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.428873 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-config-volume\") pod \"collect-profiles-29494605-qhdtp\" (UID: \"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp" Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.436716 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-secret-volume\") pod \"collect-profiles-29494605-qhdtp\" (UID: \"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp" Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.444212 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d455t\" (UniqueName: \"kubernetes.io/projected/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-kube-api-access-d455t\") pod \"collect-profiles-29494605-qhdtp\" (UID: \"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp" Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.469995 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtxmb" event={"ID":"3eb3534d-8971-42ec-8aaf-a970b786e631","Type":"ContainerStarted","Data":"656df0532adeb13cf6c9afb1413908ef0292d26089195e1975a71e8d8ff4a60d"} Jan 29 08:45:00 crc kubenswrapper[4895]: I0129 08:45:00.498469 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp" Jan 29 08:45:01 crc kubenswrapper[4895]: I0129 08:45:01.496013 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wtxmb" podStartSLOduration=4.765844134 podStartE2EDuration="1m18.495994355s" podCreationTimestamp="2026-01-29 08:43:43 +0000 UTC" firstStartedPulling="2026-01-29 08:43:45.792183699 +0000 UTC m=+167.433691845" lastFinishedPulling="2026-01-29 08:44:59.52233392 +0000 UTC m=+241.163842066" observedRunningTime="2026-01-29 08:45:01.492841599 +0000 UTC m=+243.134349745" watchObservedRunningTime="2026-01-29 08:45:01.495994355 +0000 UTC m=+243.137502501" Jan 29 08:45:03 crc kubenswrapper[4895]: I0129 08:45:03.562563 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp"] Jan 29 08:45:03 crc kubenswrapper[4895]: I0129 08:45:03.786401 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:45:03 crc kubenswrapper[4895]: I0129 08:45:03.786820 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.516350 4895 generic.go:334] "Generic (PLEG): container finished" podID="2cf95bf0-4949-44c6-9387-7fa2d4cf2b56" containerID="9803624622e3994dec96327f22fb99716cccf859b6cac0b1de5e84ff5e3b9a16" exitCode=0 Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.516450 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp" event={"ID":"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56","Type":"ContainerDied","Data":"9803624622e3994dec96327f22fb99716cccf859b6cac0b1de5e84ff5e3b9a16"} Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.516500 4895 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp" event={"ID":"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56","Type":"ContainerStarted","Data":"283dacf6daed7b832a5422e9b82143a4d27d0a1ea9b6c6f1a1545892d6e8ae8b"} Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.519745 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nfrjk" event={"ID":"0349d46c-bf39-4ba0-99be-22445866386b","Type":"ContainerStarted","Data":"1c9eab74cb14f86f36101dc1b05f62f97b33a04cda6ce5013d756e615e14ee74"} Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.526674 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2d5f8" event={"ID":"b8e96926-7c32-4a64-b37d-342a66d925ea","Type":"ContainerStarted","Data":"ccc2f3063f3215250c490b43ab7f2ee8d0e8a996fcc377d7ffd5e55e7d09dc26"} Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.530665 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b2r4l" event={"ID":"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124","Type":"ContainerStarted","Data":"53db58cfb6c74beea66833f15eb593bc28e54bf9049caad3e18c286b6025b9f2"} Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.533090 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l25mt" event={"ID":"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb","Type":"ContainerStarted","Data":"3588427658f554bd25430880d41b9d6d518bec086b2fe03d3bab1d50b0d1607a"} Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.536353 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85kbq" event={"ID":"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8","Type":"ContainerStarted","Data":"09e45f33a61901d71551ee11fd1f805ceedbe67eb4e47d1515816e4aa4aceef6"} Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.538788 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-xgjxz" event={"ID":"c14af255-6e29-4bea-978b-8b5bf6285bd8","Type":"ContainerStarted","Data":"1ab528a4a5914d257629425f8b1afc78c2957ecb174b95287edf6687fe113861"} Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.541392 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8p6f" event={"ID":"a0b31a89-9993-4996-8b19-961efcb757ed","Type":"ContainerStarted","Data":"e3b43f532aebe92c13578a1ffc6b49bc0fde97c31ecbf38278971fb211e925ee"} Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.581692 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b2r4l" podStartSLOduration=4.264416766 podStartE2EDuration="1m20.581670034s" podCreationTimestamp="2026-01-29 08:43:44 +0000 UTC" firstStartedPulling="2026-01-29 08:43:46.873019704 +0000 UTC m=+168.514527850" lastFinishedPulling="2026-01-29 08:45:03.190272972 +0000 UTC m=+244.831781118" observedRunningTime="2026-01-29 08:45:04.580480664 +0000 UTC m=+246.221988820" watchObservedRunningTime="2026-01-29 08:45:04.581670034 +0000 UTC m=+246.223178190" Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.615960 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xgjxz" podStartSLOduration=4.157518128 podStartE2EDuration="1m20.61593996s" podCreationTimestamp="2026-01-29 08:43:44 +0000 UTC" firstStartedPulling="2026-01-29 08:43:46.906026042 +0000 UTC m=+168.547534188" lastFinishedPulling="2026-01-29 08:45:03.364447874 +0000 UTC m=+245.005956020" observedRunningTime="2026-01-29 08:45:04.611628815 +0000 UTC m=+246.253136971" watchObservedRunningTime="2026-01-29 08:45:04.61593996 +0000 UTC m=+246.257448106" Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.622889 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:45:04 crc 
kubenswrapper[4895]: I0129 08:45:04.622957 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.646043 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2d5f8" podStartSLOduration=4.164511255 podStartE2EDuration="1m23.646019424s" podCreationTimestamp="2026-01-29 08:43:41 +0000 UTC" firstStartedPulling="2026-01-29 08:43:43.640704557 +0000 UTC m=+165.282212703" lastFinishedPulling="2026-01-29 08:45:03.122212726 +0000 UTC m=+244.763720872" observedRunningTime="2026-01-29 08:45:04.644747101 +0000 UTC m=+246.286255267" watchObservedRunningTime="2026-01-29 08:45:04.646019424 +0000 UTC m=+246.287527570" Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.672432 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l25mt" podStartSLOduration=5.097956313 podStartE2EDuration="1m23.672410774s" podCreationTimestamp="2026-01-29 08:43:41 +0000 UTC" firstStartedPulling="2026-01-29 08:43:44.707177741 +0000 UTC m=+166.348685887" lastFinishedPulling="2026-01-29 08:45:03.281632202 +0000 UTC m=+244.923140348" observedRunningTime="2026-01-29 08:45:04.669589959 +0000 UTC m=+246.311098115" watchObservedRunningTime="2026-01-29 08:45:04.672410774 +0000 UTC m=+246.313918920" Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.702591 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c8p6f" podStartSLOduration=5.452194432 podStartE2EDuration="1m23.702566941s" podCreationTimestamp="2026-01-29 08:43:41 +0000 UTC" firstStartedPulling="2026-01-29 08:43:44.73401466 +0000 UTC m=+166.375522806" lastFinishedPulling="2026-01-29 08:45:02.984387169 +0000 UTC m=+244.625895315" observedRunningTime="2026-01-29 08:45:04.69717968 +0000 UTC m=+246.338687846" 
watchObservedRunningTime="2026-01-29 08:45:04.702566941 +0000 UTC m=+246.344075087" Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.723023 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-85kbq" podStartSLOduration=3.072431102 podStartE2EDuration="1m21.722995579s" podCreationTimestamp="2026-01-29 08:43:43 +0000 UTC" firstStartedPulling="2026-01-29 08:43:44.714751327 +0000 UTC m=+166.356259483" lastFinishedPulling="2026-01-29 08:45:03.365315814 +0000 UTC m=+245.006823960" observedRunningTime="2026-01-29 08:45:04.721646804 +0000 UTC m=+246.363154980" watchObservedRunningTime="2026-01-29 08:45:04.722995579 +0000 UTC m=+246.364503715" Jan 29 08:45:04 crc kubenswrapper[4895]: I0129 08:45:04.750860 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nfrjk" podStartSLOduration=4.000875798 podStartE2EDuration="1m23.750833808s" podCreationTimestamp="2026-01-29 08:43:41 +0000 UTC" firstStartedPulling="2026-01-29 08:43:43.632815602 +0000 UTC m=+165.274323748" lastFinishedPulling="2026-01-29 08:45:03.382773612 +0000 UTC m=+245.024281758" observedRunningTime="2026-01-29 08:45:04.746660558 +0000 UTC m=+246.388168714" watchObservedRunningTime="2026-01-29 08:45:04.750833808 +0000 UTC m=+246.392341954" Jan 29 08:45:05 crc kubenswrapper[4895]: I0129 08:45:05.012461 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:45:05 crc kubenswrapper[4895]: I0129 08:45:05.012558 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:45:05 crc kubenswrapper[4895]: I0129 08:45:05.264326 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-wtxmb" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" containerName="registry-server" probeResult="failure" 
output=< Jan 29 08:45:05 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Jan 29 08:45:05 crc kubenswrapper[4895]: > Jan 29 08:45:05 crc kubenswrapper[4895]: I0129 08:45:05.698342 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b2r4l" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" containerName="registry-server" probeResult="failure" output=< Jan 29 08:45:05 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Jan 29 08:45:05 crc kubenswrapper[4895]: > Jan 29 08:45:05 crc kubenswrapper[4895]: I0129 08:45:05.852175 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp" Jan 29 08:45:05 crc kubenswrapper[4895]: I0129 08:45:05.907632 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d455t\" (UniqueName: \"kubernetes.io/projected/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-kube-api-access-d455t\") pod \"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56\" (UID: \"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56\") " Jan 29 08:45:05 crc kubenswrapper[4895]: I0129 08:45:05.907719 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-secret-volume\") pod \"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56\" (UID: \"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56\") " Jan 29 08:45:05 crc kubenswrapper[4895]: I0129 08:45:05.907779 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-config-volume\") pod \"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56\" (UID: \"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56\") " Jan 29 08:45:05 crc kubenswrapper[4895]: I0129 08:45:05.908795 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-config-volume" (OuterVolumeSpecName: "config-volume") pod "2cf95bf0-4949-44c6-9387-7fa2d4cf2b56" (UID: "2cf95bf0-4949-44c6-9387-7fa2d4cf2b56"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:45:05 crc kubenswrapper[4895]: I0129 08:45:05.914041 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-kube-api-access-d455t" (OuterVolumeSpecName: "kube-api-access-d455t") pod "2cf95bf0-4949-44c6-9387-7fa2d4cf2b56" (UID: "2cf95bf0-4949-44c6-9387-7fa2d4cf2b56"). InnerVolumeSpecName "kube-api-access-d455t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:05 crc kubenswrapper[4895]: I0129 08:45:05.915187 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2cf95bf0-4949-44c6-9387-7fa2d4cf2b56" (UID: "2cf95bf0-4949-44c6-9387-7fa2d4cf2b56"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:06 crc kubenswrapper[4895]: I0129 08:45:06.008936 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d455t\" (UniqueName: \"kubernetes.io/projected/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-kube-api-access-d455t\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:06 crc kubenswrapper[4895]: I0129 08:45:06.008993 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:06 crc kubenswrapper[4895]: I0129 08:45:06.009008 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:06 crc kubenswrapper[4895]: I0129 08:45:06.073531 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xgjxz" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" containerName="registry-server" probeResult="failure" output=< Jan 29 08:45:06 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Jan 29 08:45:06 crc kubenswrapper[4895]: > Jan 29 08:45:06 crc kubenswrapper[4895]: I0129 08:45:06.571262 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp" event={"ID":"2cf95bf0-4949-44c6-9387-7fa2d4cf2b56","Type":"ContainerDied","Data":"283dacf6daed7b832a5422e9b82143a4d27d0a1ea9b6c6f1a1545892d6e8ae8b"} Jan 29 08:45:06 crc kubenswrapper[4895]: I0129 08:45:06.571320 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="283dacf6daed7b832a5422e9b82143a4d27d0a1ea9b6c6f1a1545892d6e8ae8b" Jan 29 08:45:06 crc kubenswrapper[4895]: I0129 08:45:06.571331 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp" Jan 29 08:45:11 crc kubenswrapper[4895]: I0129 08:45:11.368421 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5fc8p"] Jan 29 08:45:11 crc kubenswrapper[4895]: I0129 08:45:11.489433 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:45:11 crc kubenswrapper[4895]: I0129 08:45:11.489728 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:45:11 crc kubenswrapper[4895]: I0129 08:45:11.598214 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:45:11 crc kubenswrapper[4895]: I0129 08:45:11.598283 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:45:11 crc kubenswrapper[4895]: I0129 08:45:11.718833 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:45:11 crc kubenswrapper[4895]: I0129 08:45:11.719903 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:45:11 crc kubenswrapper[4895]: I0129 08:45:11.783425 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:45:11 crc kubenswrapper[4895]: I0129 08:45:11.835825 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 08:45:11 crc kubenswrapper[4895]: I0129 08:45:11.835899 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 
08:45:11 crc kubenswrapper[4895]: I0129 08:45:11.902537 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.009280 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.009338 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.051713 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.640136 4895 patch_prober.go:28] interesting pod/router-default-5444994796-f64b6 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.640682 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-f64b6" podUID="72fba804-18f2-4fae-addd-49c6b152c262" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.671899 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.677181 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.677581 4895 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.862072 4895 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 08:45:12 crc kubenswrapper[4895]: E0129 08:45:12.862381 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf95bf0-4949-44c6-9387-7fa2d4cf2b56" containerName="collect-profiles" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.862400 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf95bf0-4949-44c6-9387-7fa2d4cf2b56" containerName="collect-profiles" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.862554 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf95bf0-4949-44c6-9387-7fa2d4cf2b56" containerName="collect-profiles" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.862962 4895 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.863109 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.863274 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a" gracePeriod=15 Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.863309 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a" gracePeriod=15 Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.863347 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0" gracePeriod=15 Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.863416 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124" gracePeriod=15 Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.863424 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855" gracePeriod=15 Jan 29 08:45:12 crc 
kubenswrapper[4895]: I0129 08:45:12.865232 4895 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 08:45:12 crc kubenswrapper[4895]: E0129 08:45:12.865401 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.865414 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 08:45:12 crc kubenswrapper[4895]: E0129 08:45:12.865422 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.865429 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 08:45:12 crc kubenswrapper[4895]: E0129 08:45:12.865441 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.865449 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 08:45:12 crc kubenswrapper[4895]: E0129 08:45:12.865460 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.865469 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 08:45:12 crc kubenswrapper[4895]: E0129 08:45:12.865481 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.865488 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 08:45:12 crc kubenswrapper[4895]: E0129 08:45:12.865500 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.865507 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 08:45:12 crc kubenswrapper[4895]: E0129 08:45:12.865515 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.865524 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.865645 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.865656 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.865665 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.865671 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.865683 4895 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 08:45:12 crc kubenswrapper[4895]: I0129 08:45:12.865962 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.014386 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.014467 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.014497 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.014563 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.014579 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.014598 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.014615 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.014637 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.116175 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 
08:45:13.116303 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.116340 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.116406 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.116476 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.116541 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.116592 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.116622 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.116651 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.116651 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.116684 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.116729 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.116767 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.116678 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.116798 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.116861 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.456832 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.457448 
4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.531736 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.532619 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.622145 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.624477 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.625414 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855" exitCode=2 Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.669821 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.670758 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: 
connect: connection refused" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.834414 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.835096 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.835518 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.871733 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.872636 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:13 crc kubenswrapper[4895]: I0129 08:45:13.873335 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 
08:45:14 crc kubenswrapper[4895]: I0129 08:45:14.632214 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 08:45:14 crc kubenswrapper[4895]: I0129 08:45:14.633766 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 08:45:14 crc kubenswrapper[4895]: I0129 08:45:14.634705 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124" exitCode=0 Jan 29 08:45:14 crc kubenswrapper[4895]: I0129 08:45:14.672210 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:45:14 crc kubenswrapper[4895]: I0129 08:45:14.673983 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:14 crc kubenswrapper[4895]: I0129 08:45:14.674229 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:14 crc kubenswrapper[4895]: I0129 08:45:14.674490 4895 status_manager.go:851] "Failed to get status for pod" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" pod="openshift-marketplace/redhat-operators-b2r4l" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-b2r4l\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:14 crc kubenswrapper[4895]: I0129 08:45:14.724132 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:45:14 crc kubenswrapper[4895]: I0129 08:45:14.725097 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:14 crc kubenswrapper[4895]: I0129 08:45:14.725299 4895 status_manager.go:851] "Failed to get status for pod" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" pod="openshift-marketplace/redhat-operators-b2r4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-b2r4l\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:14 crc kubenswrapper[4895]: I0129 08:45:14.725767 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.147030 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.148459 4895 status_manager.go:851] "Failed to get status for pod" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" pod="openshift-marketplace/redhat-operators-b2r4l" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-b2r4l\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.150266 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.150611 4895 status_manager.go:851] "Failed to get status for pod" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" pod="openshift-marketplace/redhat-operators-xgjxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xgjxz\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.150806 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.218293 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.218899 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.219152 4895 
status_manager.go:851] "Failed to get status for pod" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" pod="openshift-marketplace/redhat-operators-b2r4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-b2r4l\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.219331 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.219495 4895 status_manager.go:851] "Failed to get status for pod" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" pod="openshift-marketplace/redhat-operators-xgjxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xgjxz\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.643145 4895 generic.go:334] "Generic (PLEG): container finished" podID="e4be033f-c03a-4c59-897e-3b03190e597a" containerID="28bfb8db8ac08f06b2640d6de166d2809e3c8ea3fac597f8ba213163531d5d62" exitCode=0 Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.643255 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e4be033f-c03a-4c59-897e-3b03190e597a","Type":"ContainerDied","Data":"28bfb8db8ac08f06b2640d6de166d2809e3c8ea3fac597f8ba213163531d5d62"} Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.644124 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.644419 4895 status_manager.go:851] "Failed to get status for pod" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" pod="openshift-marketplace/redhat-operators-xgjxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xgjxz\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.645012 4895 status_manager.go:851] "Failed to get status for pod" podUID="e4be033f-c03a-4c59-897e-3b03190e597a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.645347 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.645839 4895 status_manager.go:851] "Failed to get status for pod" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" pod="openshift-marketplace/redhat-operators-b2r4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-b2r4l\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.645872 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 08:45:15 crc 
kubenswrapper[4895]: I0129 08:45:15.647583 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.648415 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a" exitCode=0 Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.648440 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0" exitCode=0 Jan 29 08:45:15 crc kubenswrapper[4895]: I0129 08:45:15.648500 4895 scope.go:117] "RemoveContainer" containerID="4b1b7421be50eb12fd5d70fc001b446c7c7065dd684f692549e606ebe0ab5730" Jan 29 08:45:16 crc kubenswrapper[4895]: I0129 08:45:16.657810 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 08:45:16 crc kubenswrapper[4895]: I0129 08:45:16.659079 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a" exitCode=0 Jan 29 08:45:16 crc kubenswrapper[4895]: I0129 08:45:16.908168 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:45:16 crc kubenswrapper[4895]: I0129 08:45:16.909667 4895 status_manager.go:851] "Failed to get status for pod" podUID="e4be033f-c03a-4c59-897e-3b03190e597a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:16 crc kubenswrapper[4895]: I0129 08:45:16.910360 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:16 crc kubenswrapper[4895]: I0129 08:45:16.911054 4895 status_manager.go:851] "Failed to get status for pod" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" pod="openshift-marketplace/redhat-operators-b2r4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-b2r4l\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:16 crc kubenswrapper[4895]: I0129 08:45:16.911388 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:16 crc kubenswrapper[4895]: I0129 08:45:16.911748 4895 status_manager.go:851] "Failed to get status for pod" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" pod="openshift-marketplace/redhat-operators-xgjxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xgjxz\": dial tcp 
38.129.56.142:6443: connect: connection refused" Jan 29 08:45:16 crc kubenswrapper[4895]: I0129 08:45:16.998608 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e4be033f-c03a-4c59-897e-3b03190e597a-var-lock\") pod \"e4be033f-c03a-4c59-897e-3b03190e597a\" (UID: \"e4be033f-c03a-4c59-897e-3b03190e597a\") " Jan 29 08:45:16 crc kubenswrapper[4895]: I0129 08:45:16.998698 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4be033f-c03a-4c59-897e-3b03190e597a-kubelet-dir\") pod \"e4be033f-c03a-4c59-897e-3b03190e597a\" (UID: \"e4be033f-c03a-4c59-897e-3b03190e597a\") " Jan 29 08:45:16 crc kubenswrapper[4895]: I0129 08:45:16.998775 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4be033f-c03a-4c59-897e-3b03190e597a-kube-api-access\") pod \"e4be033f-c03a-4c59-897e-3b03190e597a\" (UID: \"e4be033f-c03a-4c59-897e-3b03190e597a\") " Jan 29 08:45:16 crc kubenswrapper[4895]: I0129 08:45:16.998777 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4be033f-c03a-4c59-897e-3b03190e597a-var-lock" (OuterVolumeSpecName: "var-lock") pod "e4be033f-c03a-4c59-897e-3b03190e597a" (UID: "e4be033f-c03a-4c59-897e-3b03190e597a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:45:16 crc kubenswrapper[4895]: I0129 08:45:16.998829 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4be033f-c03a-4c59-897e-3b03190e597a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e4be033f-c03a-4c59-897e-3b03190e597a" (UID: "e4be033f-c03a-4c59-897e-3b03190e597a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:45:16 crc kubenswrapper[4895]: I0129 08:45:16.999222 4895 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e4be033f-c03a-4c59-897e-3b03190e597a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:16 crc kubenswrapper[4895]: I0129 08:45:16.999250 4895 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4be033f-c03a-4c59-897e-3b03190e597a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.009037 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4be033f-c03a-4c59-897e-3b03190e597a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e4be033f-c03a-4c59-897e-3b03190e597a" (UID: "e4be033f-c03a-4c59-897e-3b03190e597a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.100876 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4be033f-c03a-4c59-897e-3b03190e597a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.157443 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.158669 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.159506 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.160240 4895 status_manager.go:851] "Failed to get status for pod" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" pod="openshift-marketplace/redhat-operators-xgjxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xgjxz\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.160803 4895 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.161096 4895 status_manager.go:851] "Failed to get status for pod" podUID="e4be033f-c03a-4c59-897e-3b03190e597a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.161433 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: 
connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.161773 4895 status_manager.go:851] "Failed to get status for pod" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" pod="openshift-marketplace/redhat-operators-b2r4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-b2r4l\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.201812 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.201900 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.201992 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.202032 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.202069 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.202170 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.202316 4895 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.202339 4895 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.202349 4895 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.218617 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.665992 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.666847 4895 scope.go:117] "RemoveContainer" containerID="37c1c7610f8741de4905335b5101f48e698e0bf1139b22917c6cff1cac22bd9a" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.666985 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.667832 4895 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.668355 4895 status_manager.go:851] "Failed to get status for pod" podUID="e4be033f-c03a-4c59-897e-3b03190e597a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.669788 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.670105 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e4be033f-c03a-4c59-897e-3b03190e597a","Type":"ContainerDied","Data":"457594cb91617af3504a759418b09f7182e95630585fde49c90c02db923e26f6"} Jan 29 
08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.670143 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="457594cb91617af3504a759418b09f7182e95630585fde49c90c02db923e26f6" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.670223 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.670240 4895 status_manager.go:851] "Failed to get status for pod" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" pod="openshift-marketplace/redhat-operators-b2r4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-b2r4l\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.670444 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.671036 4895 status_manager.go:851] "Failed to get status for pod" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" pod="openshift-marketplace/redhat-operators-xgjxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xgjxz\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.671739 4895 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc 
kubenswrapper[4895]: I0129 08:45:17.672083 4895 status_manager.go:851] "Failed to get status for pod" podUID="e4be033f-c03a-4c59-897e-3b03190e597a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.672567 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.673712 4895 status_manager.go:851] "Failed to get status for pod" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" pod="openshift-marketplace/redhat-operators-b2r4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-b2r4l\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.676047 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.676523 4895 status_manager.go:851] "Failed to get status for pod" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" pod="openshift-marketplace/redhat-operators-xgjxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xgjxz\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 
08:45:17.676824 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.677052 4895 status_manager.go:851] "Failed to get status for pod" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" pod="openshift-marketplace/redhat-operators-xgjxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xgjxz\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.677252 4895 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.677431 4895 status_manager.go:851] "Failed to get status for pod" podUID="e4be033f-c03a-4c59-897e-3b03190e597a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.677577 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.677721 4895 status_manager.go:851] 
"Failed to get status for pod" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" pod="openshift-marketplace/redhat-operators-b2r4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-b2r4l\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.689185 4895 scope.go:117] "RemoveContainer" containerID="a7caab6d02161b54a236ff34d1e6d6a8e6ba5e475f629f2c0df717a406fa3124" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.710905 4895 scope.go:117] "RemoveContainer" containerID="d527ae2d9d0cf81448cd4e797ccb570691205847324e15e71c9167612c8a48f0" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.726234 4895 scope.go:117] "RemoveContainer" containerID="855d283c327be6b8f6a591dc1f25af154967bb9cde534ca1ef495f6f39be0855" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.752838 4895 scope.go:117] "RemoveContainer" containerID="8af119f15a816bbebbb826c50c207b9881a6ed2692f79b64f3d6912c8be6372a" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.767381 4895 scope.go:117] "RemoveContainer" containerID="2c6f1c94a36c2721a86f01dce3a11758498a6cc95e0387ec101a3724e9ef3c49" Jan 29 08:45:17 crc kubenswrapper[4895]: E0129 08:45:17.899874 4895 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.142:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:17 crc kubenswrapper[4895]: I0129 08:45:17.900282 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:17 crc kubenswrapper[4895]: W0129 08:45:17.927503 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-1e32883bf41e2f316e8ffffdb0f2d26aa9b67473f2d13962481f4219fcaf1e60 WatchSource:0}: Error finding container 1e32883bf41e2f316e8ffffdb0f2d26aa9b67473f2d13962481f4219fcaf1e60: Status 404 returned error can't find the container with id 1e32883bf41e2f316e8ffffdb0f2d26aa9b67473f2d13962481f4219fcaf1e60 Jan 29 08:45:17 crc kubenswrapper[4895]: E0129 08:45:17.930675 4895 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.142:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f273c29a6b961 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 08:45:17.930207585 +0000 UTC m=+259.571715731,LastTimestamp:2026-01-29 08:45:17.930207585 +0000 UTC m=+259.571715731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 08:45:18 crc kubenswrapper[4895]: I0129 08:45:18.683056 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9b766a639aff81078c347213e8cd69ac8556b4131637ca065860b3413d431415"} Jan 29 08:45:18 crc kubenswrapper[4895]: I0129 08:45:18.683109 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1e32883bf41e2f316e8ffffdb0f2d26aa9b67473f2d13962481f4219fcaf1e60"} Jan 29 08:45:18 crc kubenswrapper[4895]: I0129 08:45:18.684144 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:18 crc kubenswrapper[4895]: E0129 08:45:18.684238 4895 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.142:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:45:18 crc kubenswrapper[4895]: I0129 08:45:18.684380 4895 status_manager.go:851] "Failed to get status for pod" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" pod="openshift-marketplace/redhat-operators-b2r4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-b2r4l\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:18 crc kubenswrapper[4895]: I0129 08:45:18.684735 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:18 crc kubenswrapper[4895]: I0129 08:45:18.685748 4895 status_manager.go:851] "Failed to get status for pod" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" pod="openshift-marketplace/redhat-operators-xgjxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xgjxz\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:18 crc kubenswrapper[4895]: I0129 08:45:18.686080 4895 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:18 crc kubenswrapper[4895]: I0129 08:45:18.686438 4895 status_manager.go:851] "Failed to get status for pod" podUID="e4be033f-c03a-4c59-897e-3b03190e597a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:19 crc kubenswrapper[4895]: I0129 08:45:19.214072 4895 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:19 crc kubenswrapper[4895]: I0129 08:45:19.214457 4895 status_manager.go:851] "Failed to get status for pod" podUID="e4be033f-c03a-4c59-897e-3b03190e597a" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:19 crc kubenswrapper[4895]: I0129 08:45:19.214780 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:19 crc kubenswrapper[4895]: I0129 08:45:19.215048 4895 status_manager.go:851] "Failed to get status for pod" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" pod="openshift-marketplace/redhat-operators-b2r4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-b2r4l\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:19 crc kubenswrapper[4895]: I0129 08:45:19.215288 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:19 crc kubenswrapper[4895]: I0129 08:45:19.215473 4895 status_manager.go:851] "Failed to get status for pod" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" pod="openshift-marketplace/redhat-operators-xgjxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xgjxz\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:20 crc kubenswrapper[4895]: E0129 08:45:20.820872 4895 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.129.56.142:6443: connect: connection refused" Jan 29 08:45:20 crc kubenswrapper[4895]: E0129 08:45:20.821129 4895 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:20 crc kubenswrapper[4895]: E0129 08:45:20.821306 4895 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:20 crc kubenswrapper[4895]: E0129 08:45:20.821471 4895 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:20 crc kubenswrapper[4895]: E0129 08:45:20.821665 4895 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:20 crc kubenswrapper[4895]: I0129 08:45:20.821691 4895 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 08:45:20 crc kubenswrapper[4895]: E0129 08:45:20.821893 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" interval="200ms" Jan 29 08:45:21 crc kubenswrapper[4895]: E0129 08:45:21.023181 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" interval="400ms" Jan 29 08:45:21 crc kubenswrapper[4895]: E0129 08:45:21.249300 4895 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.142:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" volumeName="registry-storage" Jan 29 08:45:21 crc kubenswrapper[4895]: E0129 08:45:21.378178 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:45:21Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:45:21Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:45:21Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T08:45:21Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 
08:45:21 crc kubenswrapper[4895]: E0129 08:45:21.378422 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:21 crc kubenswrapper[4895]: E0129 08:45:21.378592 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:21 crc kubenswrapper[4895]: E0129 08:45:21.378749 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:21 crc kubenswrapper[4895]: E0129 08:45:21.378897 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:21 crc kubenswrapper[4895]: E0129 08:45:21.378937 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 08:45:21 crc kubenswrapper[4895]: E0129 08:45:21.424835 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" interval="800ms" Jan 29 08:45:22 crc kubenswrapper[4895]: E0129 08:45:22.225704 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection 
refused" interval="1.6s" Jan 29 08:45:23 crc kubenswrapper[4895]: E0129 08:45:23.827654 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" interval="3.2s" Jan 29 08:45:25 crc kubenswrapper[4895]: I0129 08:45:25.730674 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 08:45:25 crc kubenswrapper[4895]: I0129 08:45:25.730744 4895 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c" exitCode=1 Jan 29 08:45:25 crc kubenswrapper[4895]: I0129 08:45:25.730784 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c"} Jan 29 08:45:25 crc kubenswrapper[4895]: I0129 08:45:25.731398 4895 scope.go:117] "RemoveContainer" containerID="0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c" Jan 29 08:45:25 crc kubenswrapper[4895]: I0129 08:45:25.731760 4895 status_manager.go:851] "Failed to get status for pod" podUID="e4be033f-c03a-4c59-897e-3b03190e597a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:25 crc kubenswrapper[4895]: I0129 08:45:25.732226 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:25 crc kubenswrapper[4895]: I0129 08:45:25.732693 4895 status_manager.go:851] "Failed to get status for pod" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" pod="openshift-marketplace/redhat-operators-b2r4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-b2r4l\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:25 crc kubenswrapper[4895]: I0129 08:45:25.733065 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:25 crc kubenswrapper[4895]: I0129 08:45:25.733421 4895 status_manager.go:851] "Failed to get status for pod" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" pod="openshift-marketplace/redhat-operators-xgjxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xgjxz\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:25 crc kubenswrapper[4895]: I0129 08:45:25.733712 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:25 crc kubenswrapper[4895]: I0129 08:45:25.748620 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:45:26 crc 
kubenswrapper[4895]: I0129 08:45:26.210745 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.212497 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.213213 4895 status_manager.go:851] "Failed to get status for pod" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" pod="openshift-marketplace/redhat-operators-xgjxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xgjxz\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.213496 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.213734 4895 status_manager.go:851] "Failed to get status for pod" podUID="e4be033f-c03a-4c59-897e-3b03190e597a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.214444 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.214798 4895 status_manager.go:851] "Failed to get status for pod" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" pod="openshift-marketplace/redhat-operators-b2r4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-b2r4l\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.230831 4895 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="474926ed-2673-4f4d-b872-3072054ba68e" Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.230868 4895 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="474926ed-2673-4f4d-b872-3072054ba68e" Jan 29 08:45:26 crc kubenswrapper[4895]: E0129 08:45:26.231837 4895 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.232431 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:26 crc kubenswrapper[4895]: W0129 08:45:26.258611 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-a44844b549566b0368c9486b268bc5edd5cc765c34247fb7e69f449f7378b4dd WatchSource:0}: Error finding container a44844b549566b0368c9486b268bc5edd5cc765c34247fb7e69f449f7378b4dd: Status 404 returned error can't find the container with id a44844b549566b0368c9486b268bc5edd5cc765c34247fb7e69f449f7378b4dd Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.739960 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"575aaf3fec1d3600d42ff72fb93d5543d0fcd223796a2ee085409934722cc66e"} Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.740456 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a44844b549566b0368c9486b268bc5edd5cc765c34247fb7e69f449f7378b4dd"} Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.746167 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.746258 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cc834c31c8829d0938ffeb98bb5af2719656c638e9d073fe487d163c01a84319"} Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.747264 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" 
pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.747571 4895 status_manager.go:851] "Failed to get status for pod" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" pod="openshift-marketplace/redhat-operators-xgjxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xgjxz\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.747892 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.748437 4895 status_manager.go:851] "Failed to get status for pod" podUID="e4be033f-c03a-4c59-897e-3b03190e597a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.748697 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:26 crc kubenswrapper[4895]: I0129 08:45:26.748995 4895 status_manager.go:851] "Failed to get status for pod" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" 
pod="openshift-marketplace/redhat-operators-b2r4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-b2r4l\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:27 crc kubenswrapper[4895]: E0129 08:45:27.030899 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.142:6443: connect: connection refused" interval="6.4s" Jan 29 08:45:27 crc kubenswrapper[4895]: E0129 08:45:27.561032 4895 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.142:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f273c29a6b961 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 08:45:17.930207585 +0000 UTC m=+259.571715731,LastTimestamp:2026-01-29 08:45:17.930207585 +0000 UTC m=+259.571715731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 08:45:27 crc kubenswrapper[4895]: I0129 08:45:27.754947 4895 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" 
containerID="575aaf3fec1d3600d42ff72fb93d5543d0fcd223796a2ee085409934722cc66e" exitCode=0 Jan 29 08:45:27 crc kubenswrapper[4895]: I0129 08:45:27.755095 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"575aaf3fec1d3600d42ff72fb93d5543d0fcd223796a2ee085409934722cc66e"} Jan 29 08:45:27 crc kubenswrapper[4895]: I0129 08:45:27.755547 4895 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="474926ed-2673-4f4d-b872-3072054ba68e" Jan 29 08:45:27 crc kubenswrapper[4895]: I0129 08:45:27.755582 4895 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="474926ed-2673-4f4d-b872-3072054ba68e" Jan 29 08:45:27 crc kubenswrapper[4895]: E0129 08:45:27.756210 4895 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:27 crc kubenswrapper[4895]: I0129 08:45:27.756224 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:27 crc kubenswrapper[4895]: I0129 08:45:27.756881 4895 status_manager.go:851] "Failed to get status for pod" podUID="e4be033f-c03a-4c59-897e-3b03190e597a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:27 crc 
kubenswrapper[4895]: I0129 08:45:27.757280 4895 status_manager.go:851] "Failed to get status for pod" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" pod="openshift-marketplace/redhat-marketplace-85kbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-85kbq\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:27 crc kubenswrapper[4895]: I0129 08:45:27.757688 4895 status_manager.go:851] "Failed to get status for pod" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" pod="openshift-marketplace/redhat-operators-b2r4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-b2r4l\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:27 crc kubenswrapper[4895]: I0129 08:45:27.758159 4895 status_manager.go:851] "Failed to get status for pod" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" pod="openshift-marketplace/redhat-operators-xgjxz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xgjxz\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:27 crc kubenswrapper[4895]: I0129 08:45:27.758685 4895 status_manager.go:851] "Failed to get status for pod" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" pod="openshift-marketplace/redhat-marketplace-wtxmb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-wtxmb\": dial tcp 38.129.56.142:6443: connect: connection refused" Jan 29 08:45:28 crc kubenswrapper[4895]: I0129 08:45:28.596268 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:45:28 crc kubenswrapper[4895]: I0129 08:45:28.596389 4895 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe 
status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 08:45:28 crc kubenswrapper[4895]: I0129 08:45:28.596724 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 08:45:28 crc kubenswrapper[4895]: I0129 08:45:28.778535 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"57a567b3839300947532ebf6ec33cd3711e67f35ec750886d96ee9a6bf175a46"} Jan 29 08:45:28 crc kubenswrapper[4895]: I0129 08:45:28.778589 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"913a978936c30cb93196d8d924aa8ac95d5eb7761028403eb8752904dcf3d531"} Jan 29 08:45:28 crc kubenswrapper[4895]: I0129 08:45:28.778607 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"226c34c934251149966d41d8327e6c6e2c414750ae775fab484e37ff3abae0a6"} Jan 29 08:45:28 crc kubenswrapper[4895]: I0129 08:45:28.778619 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4af15be510391b1e8b679e23ad23c27bcbb5b1c1252c8e3cdcfad59fe5a4ac5a"} Jan 29 08:45:29 crc kubenswrapper[4895]: I0129 08:45:29.788438 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a9050dd99d05b6724ff52be72bd4918ac63ef38b55cc75771bf9b86d7b4902cc"} Jan 29 08:45:29 crc kubenswrapper[4895]: I0129 08:45:29.789033 4895 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="474926ed-2673-4f4d-b872-3072054ba68e" Jan 29 08:45:29 crc kubenswrapper[4895]: I0129 08:45:29.789066 4895 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="474926ed-2673-4f4d-b872-3072054ba68e" Jan 29 08:45:29 crc kubenswrapper[4895]: I0129 08:45:29.789049 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:31 crc kubenswrapper[4895]: I0129 08:45:31.233066 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:31 crc kubenswrapper[4895]: I0129 08:45:31.233129 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:31 crc kubenswrapper[4895]: I0129 08:45:31.241119 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:32 crc kubenswrapper[4895]: I0129 08:45:32.038009 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:45:34 crc kubenswrapper[4895]: I0129 08:45:34.801872 4895 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:34 crc kubenswrapper[4895]: I0129 08:45:34.828036 4895 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="474926ed-2673-4f4d-b872-3072054ba68e" Jan 29 08:45:34 crc kubenswrapper[4895]: I0129 08:45:34.828089 4895 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="474926ed-2673-4f4d-b872-3072054ba68e" Jan 29 08:45:34 crc kubenswrapper[4895]: I0129 08:45:34.841283 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:34 crc kubenswrapper[4895]: I0129 08:45:34.844185 4895 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6b6bc013-4294-4f19-af90-47bf04dda723" Jan 29 08:45:35 crc kubenswrapper[4895]: I0129 08:45:35.836358 4895 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="474926ed-2673-4f4d-b872-3072054ba68e" Jan 29 08:45:35 crc kubenswrapper[4895]: I0129 08:45:35.836400 4895 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="474926ed-2673-4f4d-b872-3072054ba68e" Jan 29 08:45:35 crc kubenswrapper[4895]: I0129 08:45:35.842871 4895 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6b6bc013-4294-4f19-af90-47bf04dda723" Jan 29 08:45:36 crc kubenswrapper[4895]: I0129 08:45:36.411400 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" podUID="e6030804-d717-42c9-b2b2-8eaaadaddca0" containerName="oauth-openshift" containerID="cri-o://836a52176201c465f32ab5d24d654f439b5d07ec1f3af7069997addb260a0041" gracePeriod=15 Jan 29 08:45:36 crc kubenswrapper[4895]: I0129 08:45:36.844594 4895 generic.go:334] "Generic (PLEG): container finished" podID="e6030804-d717-42c9-b2b2-8eaaadaddca0" containerID="836a52176201c465f32ab5d24d654f439b5d07ec1f3af7069997addb260a0041" exitCode=0 Jan 29 08:45:36 crc kubenswrapper[4895]: I0129 08:45:36.844643 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" event={"ID":"e6030804-d717-42c9-b2b2-8eaaadaddca0","Type":"ContainerDied","Data":"836a52176201c465f32ab5d24d654f439b5d07ec1f3af7069997addb260a0041"} Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.318723 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.399694 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-cliconfig\") pod \"e6030804-d717-42c9-b2b2-8eaaadaddca0\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.399738 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-login\") pod \"e6030804-d717-42c9-b2b2-8eaaadaddca0\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.399814 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkccl\" (UniqueName: \"kubernetes.io/projected/e6030804-d717-42c9-b2b2-8eaaadaddca0-kube-api-access-fkccl\") pod \"e6030804-d717-42c9-b2b2-8eaaadaddca0\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.399856 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-idp-0-file-data\") pod \"e6030804-d717-42c9-b2b2-8eaaadaddca0\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " Jan 29 
08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.399888 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-serving-cert\") pod \"e6030804-d717-42c9-b2b2-8eaaadaddca0\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.399962 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-ocp-branding-template\") pod \"e6030804-d717-42c9-b2b2-8eaaadaddca0\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.399986 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-trusted-ca-bundle\") pod \"e6030804-d717-42c9-b2b2-8eaaadaddca0\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.400033 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-session\") pod \"e6030804-d717-42c9-b2b2-8eaaadaddca0\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.400078 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e6030804-d717-42c9-b2b2-8eaaadaddca0-audit-dir\") pod \"e6030804-d717-42c9-b2b2-8eaaadaddca0\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.400098 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-error\") pod \"e6030804-d717-42c9-b2b2-8eaaadaddca0\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.400132 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-router-certs\") pod \"e6030804-d717-42c9-b2b2-8eaaadaddca0\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.400152 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-audit-policies\") pod \"e6030804-d717-42c9-b2b2-8eaaadaddca0\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.400301 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6030804-d717-42c9-b2b2-8eaaadaddca0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e6030804-d717-42c9-b2b2-8eaaadaddca0" (UID: "e6030804-d717-42c9-b2b2-8eaaadaddca0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.401018 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e6030804-d717-42c9-b2b2-8eaaadaddca0" (UID: "e6030804-d717-42c9-b2b2-8eaaadaddca0"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.401090 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-service-ca\") pod \"e6030804-d717-42c9-b2b2-8eaaadaddca0\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.401154 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-provider-selection\") pod \"e6030804-d717-42c9-b2b2-8eaaadaddca0\" (UID: \"e6030804-d717-42c9-b2b2-8eaaadaddca0\") " Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.401412 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e6030804-d717-42c9-b2b2-8eaaadaddca0" (UID: "e6030804-d717-42c9-b2b2-8eaaadaddca0"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.401766 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e6030804-d717-42c9-b2b2-8eaaadaddca0" (UID: "e6030804-d717-42c9-b2b2-8eaaadaddca0"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.401777 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e6030804-d717-42c9-b2b2-8eaaadaddca0" (UID: "e6030804-d717-42c9-b2b2-8eaaadaddca0"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.402222 4895 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.402241 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.402252 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.402262 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.402274 4895 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e6030804-d717-42c9-b2b2-8eaaadaddca0-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.407669 4895 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e6030804-d717-42c9-b2b2-8eaaadaddca0" (UID: "e6030804-d717-42c9-b2b2-8eaaadaddca0"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.407973 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e6030804-d717-42c9-b2b2-8eaaadaddca0" (UID: "e6030804-d717-42c9-b2b2-8eaaadaddca0"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.408073 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e6030804-d717-42c9-b2b2-8eaaadaddca0" (UID: "e6030804-d717-42c9-b2b2-8eaaadaddca0"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.410381 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e6030804-d717-42c9-b2b2-8eaaadaddca0" (UID: "e6030804-d717-42c9-b2b2-8eaaadaddca0"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.410413 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6030804-d717-42c9-b2b2-8eaaadaddca0-kube-api-access-fkccl" (OuterVolumeSpecName: "kube-api-access-fkccl") pod "e6030804-d717-42c9-b2b2-8eaaadaddca0" (UID: "e6030804-d717-42c9-b2b2-8eaaadaddca0"). InnerVolumeSpecName "kube-api-access-fkccl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.410613 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e6030804-d717-42c9-b2b2-8eaaadaddca0" (UID: "e6030804-d717-42c9-b2b2-8eaaadaddca0"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.410854 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e6030804-d717-42c9-b2b2-8eaaadaddca0" (UID: "e6030804-d717-42c9-b2b2-8eaaadaddca0"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.411818 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e6030804-d717-42c9-b2b2-8eaaadaddca0" (UID: "e6030804-d717-42c9-b2b2-8eaaadaddca0"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.414177 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e6030804-d717-42c9-b2b2-8eaaadaddca0" (UID: "e6030804-d717-42c9-b2b2-8eaaadaddca0"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.504153 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkccl\" (UniqueName: \"kubernetes.io/projected/e6030804-d717-42c9-b2b2-8eaaadaddca0-kube-api-access-fkccl\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.504533 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.504613 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.504696 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.504792 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-session\") 
on node \"crc\" DevicePath \"\"" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.504857 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.504962 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.505026 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.505084 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e6030804-d717-42c9-b2b2-8eaaadaddca0-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.852247 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" event={"ID":"e6030804-d717-42c9-b2b2-8eaaadaddca0","Type":"ContainerDied","Data":"00a0b2a8cf5d55eb4278ec6d48fafed44276b8ca0613398f5f3165e716b59d56"} Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.852315 4895 scope.go:117] "RemoveContainer" containerID="836a52176201c465f32ab5d24d654f439b5d07ec1f3af7069997addb260a0041" Jan 29 08:45:37 crc kubenswrapper[4895]: I0129 08:45:37.853130 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5fc8p" Jan 29 08:45:38 crc kubenswrapper[4895]: I0129 08:45:38.595575 4895 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 08:45:38 crc kubenswrapper[4895]: I0129 08:45:38.595663 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 08:45:43 crc kubenswrapper[4895]: I0129 08:45:43.858297 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 08:45:44 crc kubenswrapper[4895]: I0129 08:45:44.450072 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 08:45:44 crc kubenswrapper[4895]: I0129 08:45:44.668375 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 08:45:45 crc kubenswrapper[4895]: I0129 08:45:45.249070 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 08:45:45 crc kubenswrapper[4895]: I0129 08:45:45.268180 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 08:45:45 crc kubenswrapper[4895]: I0129 08:45:45.420325 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 08:45:45 crc kubenswrapper[4895]: 
I0129 08:45:45.968079 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 08:45:46 crc kubenswrapper[4895]: I0129 08:45:46.065856 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 08:45:46 crc kubenswrapper[4895]: I0129 08:45:46.419322 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 08:45:46 crc kubenswrapper[4895]: I0129 08:45:46.678651 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 08:45:46 crc kubenswrapper[4895]: I0129 08:45:46.877437 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 08:45:46 crc kubenswrapper[4895]: I0129 08:45:46.893825 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 08:45:47 crc kubenswrapper[4895]: I0129 08:45:47.024998 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 08:45:47 crc kubenswrapper[4895]: I0129 08:45:47.036537 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 08:45:47 crc kubenswrapper[4895]: I0129 08:45:47.086774 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 08:45:47 crc kubenswrapper[4895]: I0129 08:45:47.102081 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 08:45:47 crc kubenswrapper[4895]: I0129 08:45:47.153944 4895 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 
08:45:47 crc kubenswrapper[4895]: I0129 08:45:47.240830 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 08:45:47 crc kubenswrapper[4895]: I0129 08:45:47.363426 4895 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 08:45:47 crc kubenswrapper[4895]: I0129 08:45:47.935657 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.034762 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.044125 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.081074 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.293584 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.311736 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.404287 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.405235 4895 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.518848 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.546537 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.595895 4895 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.596104 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.596222 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.597363 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"cc834c31c8829d0938ffeb98bb5af2719656c638e9d073fe487d163c01a84319"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.597608 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" 
containerID="cri-o://cc834c31c8829d0938ffeb98bb5af2719656c638e9d073fe487d163c01a84319" gracePeriod=30 Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.770470 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.818458 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.928722 4895 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.931094 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.952394 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 08:45:48 crc kubenswrapper[4895]: I0129 08:45:48.987723 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 08:45:49 crc kubenswrapper[4895]: I0129 08:45:49.030420 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 08:45:49 crc kubenswrapper[4895]: I0129 08:45:49.098472 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 08:45:49 crc kubenswrapper[4895]: I0129 08:45:49.178299 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 08:45:49 crc kubenswrapper[4895]: I0129 08:45:49.302603 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 08:45:49 crc kubenswrapper[4895]: 
I0129 08:45:49.351869 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 08:45:49 crc kubenswrapper[4895]: I0129 08:45:49.453006 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 08:45:49 crc kubenswrapper[4895]: I0129 08:45:49.466511 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 08:45:49 crc kubenswrapper[4895]: I0129 08:45:49.515773 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 08:45:49 crc kubenswrapper[4895]: I0129 08:45:49.527379 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 08:45:49 crc kubenswrapper[4895]: I0129 08:45:49.626385 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 08:45:49 crc kubenswrapper[4895]: I0129 08:45:49.689400 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 08:45:49 crc kubenswrapper[4895]: I0129 08:45:49.826431 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 08:45:49 crc kubenswrapper[4895]: I0129 08:45:49.968368 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.017483 4895 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.031275 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.041112 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.084352 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.188539 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.190378 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.263505 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.286403 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.425063 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.434858 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.496594 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.503907 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.539038 4895 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.545211 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.548000 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.590033 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.774173 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.774547 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.863210 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 08:45:50 crc kubenswrapper[4895]: I0129 08:45:50.919697 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.011814 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.063764 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.085488 4895 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.158844 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.164471 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.220591 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.225256 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.255536 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.268071 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.366670 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.379722 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.437904 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.538984 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 08:45:51 crc 
kubenswrapper[4895]: I0129 08:45:51.644314 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.655871 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.684241 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.685687 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.748561 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.813615 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.875760 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.923854 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.923867 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 08:45:51 crc kubenswrapper[4895]: I0129 08:45:51.965355 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.004220 
4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.035835 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.147695 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.215484 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.309362 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.311906 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.361462 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.466338 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.469586 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.511124 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.526336 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 08:45:52 crc 
kubenswrapper[4895]: I0129 08:45:52.610866 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.625269 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.664384 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.829046 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.860444 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.898798 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.920664 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 08:45:52 crc kubenswrapper[4895]: I0129 08:45:52.987466 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.005980 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.029601 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.043755 4895 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.177489 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.284535 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.303186 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.307714 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.379342 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.418401 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.519048 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.531050 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.549487 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.552010 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 
08:45:53.586420 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.646666 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.682771 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.689092 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.780043 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 08:45:53 crc kubenswrapper[4895]: I0129 08:45:53.841746 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.002776 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.041208 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.053376 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.122256 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.141779 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.171629 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.203187 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.312042 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.319362 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.365959 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.388888 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.430960 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.438137 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.527363 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.532721 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 
08:45:54.560967 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.586780 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.714462 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.820198 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.851114 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.861893 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.878809 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.880564 4895 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.886309 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-5fc8p"] Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.886382 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp","openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 08:45:54 crc kubenswrapper[4895]: E0129 08:45:54.886614 4895 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6030804-d717-42c9-b2b2-8eaaadaddca0" containerName="oauth-openshift" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.886631 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6030804-d717-42c9-b2b2-8eaaadaddca0" containerName="oauth-openshift" Jan 29 08:45:54 crc kubenswrapper[4895]: E0129 08:45:54.886641 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4be033f-c03a-4c59-897e-3b03190e597a" containerName="installer" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.886648 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4be033f-c03a-4c59-897e-3b03190e597a" containerName="installer" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.886776 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4be033f-c03a-4c59-897e-3b03190e597a" containerName="installer" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.886791 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6030804-d717-42c9-b2b2-8eaaadaddca0" containerName="oauth-openshift" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.886963 4895 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="474926ed-2673-4f4d-b872-3072054ba68e" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.887014 4895 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="474926ed-2673-4f4d-b872-3072054ba68e" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.887245 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.890453 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.890537 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.890582 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.890662 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.891045 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.891288 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.891372 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.891458 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.892205 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.892246 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 08:45:54 crc 
kubenswrapper[4895]: I0129 08:45:54.892284 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.892543 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.897943 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.903557 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.904645 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.912899 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.925819 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.92579693 podStartE2EDuration="20.92579693s" podCreationTimestamp="2026-01-29 08:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:45:54.921673383 +0000 UTC m=+296.563181529" watchObservedRunningTime="2026-01-29 08:45:54.92579693 +0000 UTC m=+296.567305076" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.948910 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-session\") 
pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.948994 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-audit-dir\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.949042 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-service-ca\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.949182 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.949307 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:54 crc 
kubenswrapper[4895]: I0129 08:45:54.949393 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-audit-policies\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.949436 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-user-template-error\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.949488 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-router-certs\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.949535 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-user-template-login\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.949576 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5l4b\" (UniqueName: 
\"kubernetes.io/projected/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-kube-api-access-v5l4b\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.949610 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.949639 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.949682 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.949703 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.977137 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 08:45:54 crc kubenswrapper[4895]: I0129 08:45:54.978841 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.014296 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.036930 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.052424 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.054646 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-audit-policies\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.054735 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-user-template-error\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 
08:45:55.054811 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-router-certs\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.054833 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-user-template-login\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.054856 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.054881 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5l4b\" (UniqueName: \"kubernetes.io/projected/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-kube-api-access-v5l4b\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.054932 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.054967 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.054990 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.055106 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-session\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.055135 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-audit-dir\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.055200 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-service-ca\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.055248 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.055268 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.056006 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-audit-policies\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.056095 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-audit-dir\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " 
pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.056907 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-service-ca\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.057026 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.057654 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.061697 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.061790 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.062475 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-user-template-error\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.062533 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-session\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.062644 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-router-certs\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.064515 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " 
pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.067112 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.068175 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-v4-0-config-user-template-login\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.080721 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5l4b\" (UniqueName: \"kubernetes.io/projected/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1-kube-api-access-v5l4b\") pod \"oauth-openshift-b5bbf7b69-9x9sp\" (UID: \"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\") " pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.085599 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.214163 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.219810 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6030804-d717-42c9-b2b2-8eaaadaddca0" path="/var/lib/kubelet/pods/e6030804-d717-42c9-b2b2-8eaaadaddca0/volumes" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.243127 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.257442 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.288446 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.366181 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.381586 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.455545 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.547256 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.675754 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp"] Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.721020 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 
29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.743283 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.827585 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.873033 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.891795 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 08:45:55 crc kubenswrapper[4895]: I0129 08:45:55.927274 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 08:45:56 crc kubenswrapper[4895]: I0129 08:45:56.167800 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 08:45:56 crc kubenswrapper[4895]: I0129 08:45:56.233626 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 08:45:56 crc kubenswrapper[4895]: I0129 08:45:56.294001 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 08:45:56 crc kubenswrapper[4895]: I0129 08:45:56.300654 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 08:45:56 crc kubenswrapper[4895]: I0129 08:45:56.351599 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 08:45:56 crc kubenswrapper[4895]: I0129 08:45:56.389698 4895 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-console"/"networking-console-plugin" Jan 29 08:45:56 crc kubenswrapper[4895]: I0129 08:45:56.449619 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 08:45:56 crc kubenswrapper[4895]: I0129 08:45:56.546990 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 08:45:56 crc kubenswrapper[4895]: I0129 08:45:56.561595 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 08:45:56 crc kubenswrapper[4895]: I0129 08:45:56.637940 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 08:45:56 crc kubenswrapper[4895]: I0129 08:45:56.675933 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 08:45:56 crc kubenswrapper[4895]: I0129 08:45:56.699095 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 08:45:56 crc kubenswrapper[4895]: I0129 08:45:56.859369 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 08:45:56 crc kubenswrapper[4895]: I0129 08:45:56.916028 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 08:45:56 crc kubenswrapper[4895]: I0129 08:45:56.927789 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 08:45:57 crc kubenswrapper[4895]: I0129 08:45:57.013768 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 08:45:57 crc kubenswrapper[4895]: I0129 08:45:57.084228 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 08:45:57 crc kubenswrapper[4895]: I0129 08:45:57.106798 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 08:45:57 crc kubenswrapper[4895]: I0129 08:45:57.121530 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 08:45:57 crc kubenswrapper[4895]: I0129 08:45:57.139330 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 08:45:57 crc kubenswrapper[4895]: I0129 08:45:57.215248 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 08:45:57 crc kubenswrapper[4895]: I0129 08:45:57.294559 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 08:45:57 crc kubenswrapper[4895]: I0129 08:45:57.329593 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 08:45:57 crc kubenswrapper[4895]: I0129 08:45:57.405391 4895 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 08:45:57 crc kubenswrapper[4895]: I0129 08:45:57.405735 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://9b766a639aff81078c347213e8cd69ac8556b4131637ca065860b3413d431415" gracePeriod=5 Jan 29 08:45:57 crc kubenswrapper[4895]: I0129 08:45:57.531709 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 08:45:57 crc kubenswrapper[4895]: I0129 08:45:57.552225 4895 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 08:45:57 crc kubenswrapper[4895]: I0129 08:45:57.629673 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 08:45:57 crc kubenswrapper[4895]: I0129 08:45:57.767776 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 08:45:57 crc kubenswrapper[4895]: I0129 08:45:57.819158 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 08:45:58 crc kubenswrapper[4895]: I0129 08:45:58.094194 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 08:45:58 crc kubenswrapper[4895]: I0129 08:45:58.111844 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 08:45:58 crc kubenswrapper[4895]: E0129 08:45:58.152688 4895 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 29 08:45:58 crc kubenswrapper[4895]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-b5bbf7b69-9x9sp_openshift-authentication_91417fe9-4f8b-4e92-8d34-7f05f41ef6c1_0(cb4d6713926992ee7b10d183c0bbad0134efb1b0934826d29d04a5c37e4eb19c): error adding pod openshift-authentication_oauth-openshift-b5bbf7b69-9x9sp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"cb4d6713926992ee7b10d183c0bbad0134efb1b0934826d29d04a5c37e4eb19c" Netns:"/var/run/netns/21d66dc8-9d7b-4495-97d5-09001a6fe6ff" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-b5bbf7b69-9x9sp;K8S_POD_INFRA_CONTAINER_ID=cb4d6713926992ee7b10d183c0bbad0134efb1b0934826d29d04a5c37e4eb19c;K8S_POD_UID=91417fe9-4f8b-4e92-8d34-7f05f41ef6c1" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp] networking: Multus: [openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-b5bbf7b69-9x9sp in out of cluster comm: pod "oauth-openshift-b5bbf7b69-9x9sp" not found Jan 29 08:45:58 crc kubenswrapper[4895]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 08:45:58 crc kubenswrapper[4895]: > Jan 29 08:45:58 crc kubenswrapper[4895]: E0129 08:45:58.152800 4895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 29 08:45:58 crc kubenswrapper[4895]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-b5bbf7b69-9x9sp_openshift-authentication_91417fe9-4f8b-4e92-8d34-7f05f41ef6c1_0(cb4d6713926992ee7b10d183c0bbad0134efb1b0934826d29d04a5c37e4eb19c): error adding pod openshift-authentication_oauth-openshift-b5bbf7b69-9x9sp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"cb4d6713926992ee7b10d183c0bbad0134efb1b0934826d29d04a5c37e4eb19c" Netns:"/var/run/netns/21d66dc8-9d7b-4495-97d5-09001a6fe6ff" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-b5bbf7b69-9x9sp;K8S_POD_INFRA_CONTAINER_ID=cb4d6713926992ee7b10d183c0bbad0134efb1b0934826d29d04a5c37e4eb19c;K8S_POD_UID=91417fe9-4f8b-4e92-8d34-7f05f41ef6c1" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp] networking: Multus: [openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-b5bbf7b69-9x9sp in out of cluster comm: pod "oauth-openshift-b5bbf7b69-9x9sp" not found Jan 29 08:45:58 crc kubenswrapper[4895]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 08:45:58 crc kubenswrapper[4895]: > pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:58 crc kubenswrapper[4895]: E0129 08:45:58.152850 4895 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 29 08:45:58 crc kubenswrapper[4895]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-b5bbf7b69-9x9sp_openshift-authentication_91417fe9-4f8b-4e92-8d34-7f05f41ef6c1_0(cb4d6713926992ee7b10d183c0bbad0134efb1b0934826d29d04a5c37e4eb19c): error adding pod openshift-authentication_oauth-openshift-b5bbf7b69-9x9sp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"cb4d6713926992ee7b10d183c0bbad0134efb1b0934826d29d04a5c37e4eb19c" Netns:"/var/run/netns/21d66dc8-9d7b-4495-97d5-09001a6fe6ff" 
IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-b5bbf7b69-9x9sp;K8S_POD_INFRA_CONTAINER_ID=cb4d6713926992ee7b10d183c0bbad0134efb1b0934826d29d04a5c37e4eb19c;K8S_POD_UID=91417fe9-4f8b-4e92-8d34-7f05f41ef6c1" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp] networking: Multus: [openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-b5bbf7b69-9x9sp in out of cluster comm: pod "oauth-openshift-b5bbf7b69-9x9sp" not found Jan 29 08:45:58 crc kubenswrapper[4895]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 08:45:58 crc kubenswrapper[4895]: > pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:58 crc kubenswrapper[4895]: E0129 08:45:58.152948 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-b5bbf7b69-9x9sp_openshift-authentication(91417fe9-4f8b-4e92-8d34-7f05f41ef6c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-b5bbf7b69-9x9sp_openshift-authentication(91417fe9-4f8b-4e92-8d34-7f05f41ef6c1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-b5bbf7b69-9x9sp_openshift-authentication_91417fe9-4f8b-4e92-8d34-7f05f41ef6c1_0(cb4d6713926992ee7b10d183c0bbad0134efb1b0934826d29d04a5c37e4eb19c): error adding pod openshift-authentication_oauth-openshift-b5bbf7b69-9x9sp to CNI network 
\\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"cb4d6713926992ee7b10d183c0bbad0134efb1b0934826d29d04a5c37e4eb19c\\\" Netns:\\\"/var/run/netns/21d66dc8-9d7b-4495-97d5-09001a6fe6ff\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-b5bbf7b69-9x9sp;K8S_POD_INFRA_CONTAINER_ID=cb4d6713926992ee7b10d183c0bbad0134efb1b0934826d29d04a5c37e4eb19c;K8S_POD_UID=91417fe9-4f8b-4e92-8d34-7f05f41ef6c1\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp] networking: Multus: [openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp/91417fe9-4f8b-4e92-8d34-7f05f41ef6c1]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-b5bbf7b69-9x9sp in out of cluster comm: pod \\\"oauth-openshift-b5bbf7b69-9x9sp\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" podUID="91417fe9-4f8b-4e92-8d34-7f05f41ef6c1" Jan 29 08:45:58 crc kubenswrapper[4895]: I0129 08:45:58.184057 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 08:45:58 crc kubenswrapper[4895]: I0129 08:45:58.197951 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 08:45:58 crc kubenswrapper[4895]: I0129 08:45:58.217102 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 08:45:58 crc kubenswrapper[4895]: I0129 08:45:58.256722 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 08:45:58 crc kubenswrapper[4895]: I0129 08:45:58.343834 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 08:45:58 crc kubenswrapper[4895]: I0129 08:45:58.380024 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 08:45:58 crc kubenswrapper[4895]: I0129 08:45:58.463763 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 08:45:58 crc kubenswrapper[4895]: I0129 08:45:58.578032 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 08:45:58 crc kubenswrapper[4895]: I0129 08:45:58.932950 4895 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 29 08:45:58 crc kubenswrapper[4895]: I0129 08:45:58.940951 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 08:45:58 crc kubenswrapper[4895]: I0129 08:45:58.985804 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 08:45:59 crc kubenswrapper[4895]: I0129 08:45:59.004510 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:59 crc kubenswrapper[4895]: I0129 08:45:59.005169 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:45:59 crc kubenswrapper[4895]: I0129 08:45:59.105632 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 08:45:59 crc kubenswrapper[4895]: I0129 08:45:59.219877 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 08:45:59 crc kubenswrapper[4895]: I0129 08:45:59.415280 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 08:45:59 crc kubenswrapper[4895]: I0129 08:45:59.607311 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 08:45:59 crc kubenswrapper[4895]: I0129 08:45:59.675297 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 08:45:59 crc kubenswrapper[4895]: I0129 08:45:59.698332 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 08:45:59 crc kubenswrapper[4895]: I0129 08:45:59.745694 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 08:45:59 crc kubenswrapper[4895]: I0129 08:45:59.958893 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 08:46:00 crc kubenswrapper[4895]: I0129 08:46:00.039138 4895 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 08:46:00 crc kubenswrapper[4895]: I0129 08:46:00.056894 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 08:46:00 crc kubenswrapper[4895]: I0129 08:46:00.202203 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 08:46:00 crc kubenswrapper[4895]: I0129 08:46:00.341023 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 08:46:00 crc kubenswrapper[4895]: I0129 08:46:00.392036 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 08:46:00 crc kubenswrapper[4895]: I0129 08:46:00.610549 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp"] Jan 29 08:46:00 crc kubenswrapper[4895]: I0129 08:46:00.685488 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 08:46:00 crc kubenswrapper[4895]: I0129 08:46:00.688420 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 08:46:00 crc kubenswrapper[4895]: I0129 08:46:00.709425 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 08:46:00 crc kubenswrapper[4895]: I0129 08:46:00.713186 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 08:46:00 crc kubenswrapper[4895]: I0129 08:46:00.739417 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 08:46:00 crc kubenswrapper[4895]: I0129 08:46:00.769189 4895 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 08:46:00 crc kubenswrapper[4895]: I0129 08:46:00.961105 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 08:46:01 crc kubenswrapper[4895]: I0129 08:46:01.017707 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" event={"ID":"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1","Type":"ContainerStarted","Data":"34de24c374bb8f338e90a286a7e70a89c468127c54830c87e9805eced9e32e54"} Jan 29 08:46:01 crc kubenswrapper[4895]: I0129 08:46:01.017780 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" event={"ID":"91417fe9-4f8b-4e92-8d34-7f05f41ef6c1","Type":"ContainerStarted","Data":"3782043186da3e67a07f5983bc882342719efb1ac3cfd469dc03298e21d88645"} Jan 29 08:46:01 crc kubenswrapper[4895]: I0129 08:46:01.018300 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:46:01 crc kubenswrapper[4895]: I0129 08:46:01.072812 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 08:46:01 crc kubenswrapper[4895]: I0129 08:46:01.213370 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 08:46:01 crc kubenswrapper[4895]: I0129 08:46:01.403236 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 08:46:01 crc kubenswrapper[4895]: I0129 08:46:01.567448 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" Jan 29 08:46:01 crc kubenswrapper[4895]: I0129 08:46:01.593967 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication/oauth-openshift-b5bbf7b69-9x9sp" podStartSLOduration=50.593937098 podStartE2EDuration="50.593937098s" podCreationTimestamp="2026-01-29 08:45:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:46:01.041632718 +0000 UTC m=+302.683140864" watchObservedRunningTime="2026-01-29 08:46:01.593937098 +0000 UTC m=+303.235445244" Jan 29 08:46:01 crc kubenswrapper[4895]: I0129 08:46:01.909626 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 08:46:02 crc kubenswrapper[4895]: I0129 08:46:02.094558 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 08:46:02 crc kubenswrapper[4895]: I0129 08:46:02.522526 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 08:46:02 crc kubenswrapper[4895]: E0129 08:46:02.559263 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-conmon-9b766a639aff81078c347213e8cd69ac8556b4131637ca065860b3413d431415.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-9b766a639aff81078c347213e8cd69ac8556b4131637ca065860b3413d431415.scope\": RecentStats: unable to find data in memory cache]" Jan 29 08:46:02 crc kubenswrapper[4895]: I0129 08:46:02.988998 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 08:46:02 crc kubenswrapper[4895]: I0129 08:46:02.989584 
4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.036056 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.036117 4895 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="9b766a639aff81078c347213e8cd69ac8556b4131637ca065860b3413d431415" exitCode=137 Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.037047 4895 scope.go:117] "RemoveContainer" containerID="9b766a639aff81078c347213e8cd69ac8556b4131637ca065860b3413d431415" Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.037047 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.052228 4895 scope.go:117] "RemoveContainer" containerID="9b766a639aff81078c347213e8cd69ac8556b4131637ca065860b3413d431415" Jan 29 08:46:03 crc kubenswrapper[4895]: E0129 08:46:03.052837 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b766a639aff81078c347213e8cd69ac8556b4131637ca065860b3413d431415\": container with ID starting with 9b766a639aff81078c347213e8cd69ac8556b4131637ca065860b3413d431415 not found: ID does not exist" containerID="9b766a639aff81078c347213e8cd69ac8556b4131637ca065860b3413d431415" Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.052881 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b766a639aff81078c347213e8cd69ac8556b4131637ca065860b3413d431415"} err="failed to get container status 
\"9b766a639aff81078c347213e8cd69ac8556b4131637ca065860b3413d431415\": rpc error: code = NotFound desc = could not find container \"9b766a639aff81078c347213e8cd69ac8556b4131637ca065860b3413d431415\": container with ID starting with 9b766a639aff81078c347213e8cd69ac8556b4131637ca065860b3413d431415 not found: ID does not exist" Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.076541 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.076606 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.076635 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.076662 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.076826 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.077199 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.077249 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.077270 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.077288 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.086414 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.179084 4895 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.179734 4895 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.179779 4895 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.179794 4895 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.179808 4895 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4895]: I0129 08:46:03.219349 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 29 08:46:13 crc kubenswrapper[4895]: I0129 08:46:13.126783 4895 generic.go:334] "Generic (PLEG): container finished" podID="8f6088a3-2691-4029-a576-2a5abcd3b107" containerID="3d32b1abbfba61aab5e4afb6d7b933ae7af1b19d8344190dde7ed4808d2bdf80" exitCode=0 Jan 29 08:46:13 crc kubenswrapper[4895]: I0129 08:46:13.126879 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" event={"ID":"8f6088a3-2691-4029-a576-2a5abcd3b107","Type":"ContainerDied","Data":"3d32b1abbfba61aab5e4afb6d7b933ae7af1b19d8344190dde7ed4808d2bdf80"} Jan 29 08:46:13 crc kubenswrapper[4895]: I0129 08:46:13.128171 4895 scope.go:117] "RemoveContainer" containerID="3d32b1abbfba61aab5e4afb6d7b933ae7af1b19d8344190dde7ed4808d2bdf80" Jan 29 08:46:14 crc kubenswrapper[4895]: I0129 08:46:14.137536 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" event={"ID":"8f6088a3-2691-4029-a576-2a5abcd3b107","Type":"ContainerStarted","Data":"91071f5ddb1fd06d97c8cad7580fb55c9c43db65a454a64101eb8fda2e81fdf8"} Jan 29 08:46:14 crc kubenswrapper[4895]: I0129 08:46:14.138361 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:46:14 crc kubenswrapper[4895]: I0129 08:46:14.140023 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:46:19 crc kubenswrapper[4895]: I0129 08:46:19.168210 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 29 08:46:19 crc kubenswrapper[4895]: I0129 08:46:19.170255 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 08:46:19 crc kubenswrapper[4895]: I0129 08:46:19.170297 4895 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="cc834c31c8829d0938ffeb98bb5af2719656c638e9d073fe487d163c01a84319" exitCode=137 Jan 29 08:46:19 crc kubenswrapper[4895]: I0129 08:46:19.170335 4895 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"cc834c31c8829d0938ffeb98bb5af2719656c638e9d073fe487d163c01a84319"} Jan 29 08:46:19 crc kubenswrapper[4895]: I0129 08:46:19.170380 4895 scope.go:117] "RemoveContainer" containerID="0b1f1034f1052dc83a8c2a2c18bee9f871ccb70eae3f0d1b59a9ea0d25cb4d0c" Jan 29 08:46:20 crc kubenswrapper[4895]: I0129 08:46:20.179959 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 29 08:46:20 crc kubenswrapper[4895]: I0129 08:46:20.182042 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5b8f42044197980c4003757abfba8866b903f6e1ba86f85d6a9b41b073a6d1d8"} Jan 29 08:46:22 crc kubenswrapper[4895]: I0129 08:46:22.038400 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:46:28 crc kubenswrapper[4895]: I0129 08:46:28.595809 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:46:28 crc kubenswrapper[4895]: I0129 08:46:28.602802 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:46:29 crc kubenswrapper[4895]: I0129 08:46:29.243007 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 08:46:39 crc kubenswrapper[4895]: I0129 08:46:39.979625 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6696n"] Jan 29 
08:46:39 crc kubenswrapper[4895]: I0129 08:46:39.980812 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" podUID="1e7bcbc7-bddb-42f5-915e-d020e66ddeeb" containerName="controller-manager" containerID="cri-o://7c3d17aedfda0a6c96c32d55676e6e48b4197899586b9e3dd3128247baafc62c" gracePeriod=30 Jan 29 08:46:39 crc kubenswrapper[4895]: I0129 08:46:39.991810 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv"] Jan 29 08:46:39 crc kubenswrapper[4895]: I0129 08:46:39.992074 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" podUID="26ddacfd-315a-46a3-a9a1-7149df69ef84" containerName="route-controller-manager" containerID="cri-o://a71989a159a51d3c473da1f43e4e7c8a6d7402c2d6c7380fc15aca43cb2441a4" gracePeriod=30 Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.120872 4895 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6696n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.121411 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" podUID="1e7bcbc7-bddb-42f5-915e-d020e66ddeeb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.312111 4895 generic.go:334] "Generic (PLEG): container finished" podID="1e7bcbc7-bddb-42f5-915e-d020e66ddeeb" containerID="7c3d17aedfda0a6c96c32d55676e6e48b4197899586b9e3dd3128247baafc62c" 
exitCode=0 Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.312205 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" event={"ID":"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb","Type":"ContainerDied","Data":"7c3d17aedfda0a6c96c32d55676e6e48b4197899586b9e3dd3128247baafc62c"} Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.313461 4895 generic.go:334] "Generic (PLEG): container finished" podID="26ddacfd-315a-46a3-a9a1-7149df69ef84" containerID="a71989a159a51d3c473da1f43e4e7c8a6d7402c2d6c7380fc15aca43cb2441a4" exitCode=0 Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.313494 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" event={"ID":"26ddacfd-315a-46a3-a9a1-7149df69ef84","Type":"ContainerDied","Data":"a71989a159a51d3c473da1f43e4e7c8a6d7402c2d6c7380fc15aca43cb2441a4"} Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.463165 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.550145 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.662874 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vdp8\" (UniqueName: \"kubernetes.io/projected/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-kube-api-access-4vdp8\") pod \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.663014 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26ddacfd-315a-46a3-a9a1-7149df69ef84-client-ca\") pod \"26ddacfd-315a-46a3-a9a1-7149df69ef84\" (UID: \"26ddacfd-315a-46a3-a9a1-7149df69ef84\") " Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.663059 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26ddacfd-315a-46a3-a9a1-7149df69ef84-config\") pod \"26ddacfd-315a-46a3-a9a1-7149df69ef84\" (UID: \"26ddacfd-315a-46a3-a9a1-7149df69ef84\") " Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.663086 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26ddacfd-315a-46a3-a9a1-7149df69ef84-serving-cert\") pod \"26ddacfd-315a-46a3-a9a1-7149df69ef84\" (UID: \"26ddacfd-315a-46a3-a9a1-7149df69ef84\") " Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.663114 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zjlg\" (UniqueName: \"kubernetes.io/projected/26ddacfd-315a-46a3-a9a1-7149df69ef84-kube-api-access-5zjlg\") pod \"26ddacfd-315a-46a3-a9a1-7149df69ef84\" (UID: \"26ddacfd-315a-46a3-a9a1-7149df69ef84\") " Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.663149 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-client-ca\") pod \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.663165 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-serving-cert\") pod \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.663235 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-config\") pod \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.663257 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-proxy-ca-bundles\") pod \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\" (UID: \"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb\") " Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.664306 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1e7bcbc7-bddb-42f5-915e-d020e66ddeeb" (UID: "1e7bcbc7-bddb-42f5-915e-d020e66ddeeb"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.665601 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26ddacfd-315a-46a3-a9a1-7149df69ef84-client-ca" (OuterVolumeSpecName: "client-ca") pod "26ddacfd-315a-46a3-a9a1-7149df69ef84" (UID: "26ddacfd-315a-46a3-a9a1-7149df69ef84"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.665789 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26ddacfd-315a-46a3-a9a1-7149df69ef84-config" (OuterVolumeSpecName: "config") pod "26ddacfd-315a-46a3-a9a1-7149df69ef84" (UID: "26ddacfd-315a-46a3-a9a1-7149df69ef84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.666118 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-client-ca" (OuterVolumeSpecName: "client-ca") pod "1e7bcbc7-bddb-42f5-915e-d020e66ddeeb" (UID: "1e7bcbc7-bddb-42f5-915e-d020e66ddeeb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.666142 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-config" (OuterVolumeSpecName: "config") pod "1e7bcbc7-bddb-42f5-915e-d020e66ddeeb" (UID: "1e7bcbc7-bddb-42f5-915e-d020e66ddeeb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.671749 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-kube-api-access-4vdp8" (OuterVolumeSpecName: "kube-api-access-4vdp8") pod "1e7bcbc7-bddb-42f5-915e-d020e66ddeeb" (UID: "1e7bcbc7-bddb-42f5-915e-d020e66ddeeb"). InnerVolumeSpecName "kube-api-access-4vdp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.671760 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ddacfd-315a-46a3-a9a1-7149df69ef84-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "26ddacfd-315a-46a3-a9a1-7149df69ef84" (UID: "26ddacfd-315a-46a3-a9a1-7149df69ef84"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.672672 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1e7bcbc7-bddb-42f5-915e-d020e66ddeeb" (UID: "1e7bcbc7-bddb-42f5-915e-d020e66ddeeb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.673542 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ddacfd-315a-46a3-a9a1-7149df69ef84-kube-api-access-5zjlg" (OuterVolumeSpecName: "kube-api-access-5zjlg") pod "26ddacfd-315a-46a3-a9a1-7149df69ef84" (UID: "26ddacfd-315a-46a3-a9a1-7149df69ef84"). InnerVolumeSpecName "kube-api-access-5zjlg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.765353 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vdp8\" (UniqueName: \"kubernetes.io/projected/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-kube-api-access-4vdp8\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.765407 4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26ddacfd-315a-46a3-a9a1-7149df69ef84-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.765438 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26ddacfd-315a-46a3-a9a1-7149df69ef84-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.765451 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26ddacfd-315a-46a3-a9a1-7149df69ef84-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.765463 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zjlg\" (UniqueName: \"kubernetes.io/projected/26ddacfd-315a-46a3-a9a1-7149df69ef84-kube-api-access-5zjlg\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.765476 4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.765487 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.765497 4895 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:40 crc kubenswrapper[4895]: I0129 08:46:40.765508 4895 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.118693 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn"] Jan 29 08:46:41 crc kubenswrapper[4895]: E0129 08:46:41.119067 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ddacfd-315a-46a3-a9a1-7149df69ef84" containerName="route-controller-manager" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.119085 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ddacfd-315a-46a3-a9a1-7149df69ef84" containerName="route-controller-manager" Jan 29 08:46:41 crc kubenswrapper[4895]: E0129 08:46:41.119111 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e7bcbc7-bddb-42f5-915e-d020e66ddeeb" containerName="controller-manager" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.119120 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e7bcbc7-bddb-42f5-915e-d020e66ddeeb" containerName="controller-manager" Jan 29 08:46:41 crc kubenswrapper[4895]: E0129 08:46:41.119132 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.119139 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.119247 4895 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1e7bcbc7-bddb-42f5-915e-d020e66ddeeb" containerName="controller-manager" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.119260 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ddacfd-315a-46a3-a9a1-7149df69ef84" containerName="route-controller-manager" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.119270 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.119807 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.123750 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57cf459cc5-2fbdq"] Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.124807 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.133241 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57cf459cc5-2fbdq"] Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.136627 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn"] Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.271588 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-client-ca\") pod \"controller-manager-57cf459cc5-2fbdq\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.271658 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfsc6\" (UniqueName: \"kubernetes.io/projected/ec8da3b6-c917-4445-9144-4057ec38491a-kube-api-access-pfsc6\") pod \"route-controller-manager-7b9cdbf977-8nlfn\" (UID: \"ec8da3b6-c917-4445-9144-4057ec38491a\") " pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.271696 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-config\") pod \"controller-manager-57cf459cc5-2fbdq\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.271718 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5spm8\" (UniqueName: \"kubernetes.io/projected/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-kube-api-access-5spm8\") pod \"controller-manager-57cf459cc5-2fbdq\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.271740 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec8da3b6-c917-4445-9144-4057ec38491a-client-ca\") pod \"route-controller-manager-7b9cdbf977-8nlfn\" (UID: \"ec8da3b6-c917-4445-9144-4057ec38491a\") " pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.271759 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec8da3b6-c917-4445-9144-4057ec38491a-serving-cert\") pod \"route-controller-manager-7b9cdbf977-8nlfn\" (UID: \"ec8da3b6-c917-4445-9144-4057ec38491a\") " pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.271775 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-proxy-ca-bundles\") pod \"controller-manager-57cf459cc5-2fbdq\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.271791 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-serving-cert\") pod \"controller-manager-57cf459cc5-2fbdq\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") 
" pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.271821 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec8da3b6-c917-4445-9144-4057ec38491a-config\") pod \"route-controller-manager-7b9cdbf977-8nlfn\" (UID: \"ec8da3b6-c917-4445-9144-4057ec38491a\") " pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.322358 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" event={"ID":"26ddacfd-315a-46a3-a9a1-7149df69ef84","Type":"ContainerDied","Data":"e8342b2ef985d0c9d4a04bc39693ff605146ab3057b172e40e64a6668652994a"} Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.322427 4895 scope.go:117] "RemoveContainer" containerID="a71989a159a51d3c473da1f43e4e7c8a6d7402c2d6c7380fc15aca43cb2441a4" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.322573 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.325934 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" event={"ID":"1e7bcbc7-bddb-42f5-915e-d020e66ddeeb","Type":"ContainerDied","Data":"dee3c2c17e48676793a954c9e989ac67f8dd983bb4d35988c574fe234a0d7443"} Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.326048 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6696n" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.343421 4895 scope.go:117] "RemoveContainer" containerID="7c3d17aedfda0a6c96c32d55676e6e48b4197899586b9e3dd3128247baafc62c" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.345972 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv"] Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.351547 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bc7pv"] Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.356427 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6696n"] Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.361556 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6696n"] Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.372953 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-serving-cert\") pod \"controller-manager-57cf459cc5-2fbdq\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.373296 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec8da3b6-c917-4445-9144-4057ec38491a-config\") pod \"route-controller-manager-7b9cdbf977-8nlfn\" (UID: \"ec8da3b6-c917-4445-9144-4057ec38491a\") " pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.373417 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-client-ca\") pod \"controller-manager-57cf459cc5-2fbdq\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.373551 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfsc6\" (UniqueName: \"kubernetes.io/projected/ec8da3b6-c917-4445-9144-4057ec38491a-kube-api-access-pfsc6\") pod \"route-controller-manager-7b9cdbf977-8nlfn\" (UID: \"ec8da3b6-c917-4445-9144-4057ec38491a\") " pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.374055 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-config\") pod \"controller-manager-57cf459cc5-2fbdq\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.374188 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5spm8\" (UniqueName: \"kubernetes.io/projected/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-kube-api-access-5spm8\") pod \"controller-manager-57cf459cc5-2fbdq\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.374289 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec8da3b6-c917-4445-9144-4057ec38491a-client-ca\") pod \"route-controller-manager-7b9cdbf977-8nlfn\" (UID: \"ec8da3b6-c917-4445-9144-4057ec38491a\") " 
pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.374377 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec8da3b6-c917-4445-9144-4057ec38491a-serving-cert\") pod \"route-controller-manager-7b9cdbf977-8nlfn\" (UID: \"ec8da3b6-c917-4445-9144-4057ec38491a\") " pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.374472 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-proxy-ca-bundles\") pod \"controller-manager-57cf459cc5-2fbdq\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.374662 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-client-ca\") pod \"controller-manager-57cf459cc5-2fbdq\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.375215 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec8da3b6-c917-4445-9144-4057ec38491a-client-ca\") pod \"route-controller-manager-7b9cdbf977-8nlfn\" (UID: \"ec8da3b6-c917-4445-9144-4057ec38491a\") " pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.375246 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ec8da3b6-c917-4445-9144-4057ec38491a-config\") pod \"route-controller-manager-7b9cdbf977-8nlfn\" (UID: \"ec8da3b6-c917-4445-9144-4057ec38491a\") " pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.376899 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-proxy-ca-bundles\") pod \"controller-manager-57cf459cc5-2fbdq\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.377996 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-config\") pod \"controller-manager-57cf459cc5-2fbdq\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.380049 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec8da3b6-c917-4445-9144-4057ec38491a-serving-cert\") pod \"route-controller-manager-7b9cdbf977-8nlfn\" (UID: \"ec8da3b6-c917-4445-9144-4057ec38491a\") " pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.380063 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-serving-cert\") pod \"controller-manager-57cf459cc5-2fbdq\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.406637 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5spm8\" (UniqueName: \"kubernetes.io/projected/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-kube-api-access-5spm8\") pod \"controller-manager-57cf459cc5-2fbdq\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.411752 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfsc6\" (UniqueName: \"kubernetes.io/projected/ec8da3b6-c917-4445-9144-4057ec38491a-kube-api-access-pfsc6\") pod \"route-controller-manager-7b9cdbf977-8nlfn\" (UID: \"ec8da3b6-c917-4445-9144-4057ec38491a\") " pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.436960 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.470358 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.702093 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn"] Jan 29 08:46:41 crc kubenswrapper[4895]: I0129 08:46:41.762478 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57cf459cc5-2fbdq"] Jan 29 08:46:41 crc kubenswrapper[4895]: W0129 08:46:41.786397 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13c2c7b6_e2ee_4423_a410_61c6b1ccca7b.slice/crio-82fe0339771530654d9bdad7cfcc48b300d45fe3d3ec263c2a149103db617dd1 WatchSource:0}: Error finding container 82fe0339771530654d9bdad7cfcc48b300d45fe3d3ec263c2a149103db617dd1: Status 404 returned error can't find the container with id 82fe0339771530654d9bdad7cfcc48b300d45fe3d3ec263c2a149103db617dd1 Jan 29 08:46:42 crc kubenswrapper[4895]: I0129 08:46:42.359678 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" event={"ID":"ec8da3b6-c917-4445-9144-4057ec38491a","Type":"ContainerStarted","Data":"f7d785b53704a689318302ada3c80ba427ab293a4e442c1c075080f6f8eb3f83"} Jan 29 08:46:42 crc kubenswrapper[4895]: I0129 08:46:42.361557 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:42 crc kubenswrapper[4895]: I0129 08:46:42.361670 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" event={"ID":"ec8da3b6-c917-4445-9144-4057ec38491a","Type":"ContainerStarted","Data":"8d227b57f9804c7a50b6166fd5c4a2ed01404cf16097de70afd927c534c2c0d1"} Jan 29 08:46:42 crc kubenswrapper[4895]: I0129 
08:46:42.364619 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" event={"ID":"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b","Type":"ContainerStarted","Data":"8b30119b09e8f8dd612aef0318b0efa5012ef4a3db6cdb0370ec2e59b66109c2"} Jan 29 08:46:42 crc kubenswrapper[4895]: I0129 08:46:42.364669 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" event={"ID":"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b","Type":"ContainerStarted","Data":"82fe0339771530654d9bdad7cfcc48b300d45fe3d3ec263c2a149103db617dd1"} Jan 29 08:46:42 crc kubenswrapper[4895]: I0129 08:46:42.364846 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:42 crc kubenswrapper[4895]: I0129 08:46:42.369850 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:42 crc kubenswrapper[4895]: I0129 08:46:42.371599 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:42 crc kubenswrapper[4895]: I0129 08:46:42.417994 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" podStartSLOduration=2.417972692 podStartE2EDuration="2.417972692s" podCreationTimestamp="2026-01-29 08:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:46:42.417200811 +0000 UTC m=+344.058708957" watchObservedRunningTime="2026-01-29 08:46:42.417972692 +0000 UTC m=+344.059480838" Jan 29 08:46:42 crc kubenswrapper[4895]: I0129 08:46:42.537593 4895 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" podStartSLOduration=2.537571524 podStartE2EDuration="2.537571524s" podCreationTimestamp="2026-01-29 08:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:46:42.535532848 +0000 UTC m=+344.177040994" watchObservedRunningTime="2026-01-29 08:46:42.537571524 +0000 UTC m=+344.179079670" Jan 29 08:46:43 crc kubenswrapper[4895]: I0129 08:46:43.219903 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e7bcbc7-bddb-42f5-915e-d020e66ddeeb" path="/var/lib/kubelet/pods/1e7bcbc7-bddb-42f5-915e-d020e66ddeeb/volumes" Jan 29 08:46:43 crc kubenswrapper[4895]: I0129 08:46:43.221272 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ddacfd-315a-46a3-a9a1-7149df69ef84" path="/var/lib/kubelet/pods/26ddacfd-315a-46a3-a9a1-7149df69ef84/volumes" Jan 29 08:46:45 crc kubenswrapper[4895]: I0129 08:46:45.599816 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57cf459cc5-2fbdq"] Jan 29 08:46:45 crc kubenswrapper[4895]: I0129 08:46:45.600830 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" podUID="13c2c7b6-e2ee-4423-a410-61c6b1ccca7b" containerName="controller-manager" containerID="cri-o://8b30119b09e8f8dd612aef0318b0efa5012ef4a3db6cdb0370ec2e59b66109c2" gracePeriod=30 Jan 29 08:46:45 crc kubenswrapper[4895]: I0129 08:46:45.616014 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn"] Jan 29 08:46:45 crc kubenswrapper[4895]: I0129 08:46:45.616331 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" 
podUID="ec8da3b6-c917-4445-9144-4057ec38491a" containerName="route-controller-manager" containerID="cri-o://f7d785b53704a689318302ada3c80ba427ab293a4e442c1c075080f6f8eb3f83" gracePeriod=30 Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.141252 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.233446 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.246272 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec8da3b6-c917-4445-9144-4057ec38491a-serving-cert\") pod \"ec8da3b6-c917-4445-9144-4057ec38491a\" (UID: \"ec8da3b6-c917-4445-9144-4057ec38491a\") " Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.246363 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec8da3b6-c917-4445-9144-4057ec38491a-config\") pod \"ec8da3b6-c917-4445-9144-4057ec38491a\" (UID: \"ec8da3b6-c917-4445-9144-4057ec38491a\") " Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.246393 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfsc6\" (UniqueName: \"kubernetes.io/projected/ec8da3b6-c917-4445-9144-4057ec38491a-kube-api-access-pfsc6\") pod \"ec8da3b6-c917-4445-9144-4057ec38491a\" (UID: \"ec8da3b6-c917-4445-9144-4057ec38491a\") " Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.246441 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec8da3b6-c917-4445-9144-4057ec38491a-client-ca\") pod \"ec8da3b6-c917-4445-9144-4057ec38491a\" (UID: 
\"ec8da3b6-c917-4445-9144-4057ec38491a\") " Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.247727 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec8da3b6-c917-4445-9144-4057ec38491a-client-ca" (OuterVolumeSpecName: "client-ca") pod "ec8da3b6-c917-4445-9144-4057ec38491a" (UID: "ec8da3b6-c917-4445-9144-4057ec38491a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.247905 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec8da3b6-c917-4445-9144-4057ec38491a-config" (OuterVolumeSpecName: "config") pod "ec8da3b6-c917-4445-9144-4057ec38491a" (UID: "ec8da3b6-c917-4445-9144-4057ec38491a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.259878 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8da3b6-c917-4445-9144-4057ec38491a-kube-api-access-pfsc6" (OuterVolumeSpecName: "kube-api-access-pfsc6") pod "ec8da3b6-c917-4445-9144-4057ec38491a" (UID: "ec8da3b6-c917-4445-9144-4057ec38491a"). InnerVolumeSpecName "kube-api-access-pfsc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.269551 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec8da3b6-c917-4445-9144-4057ec38491a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ec8da3b6-c917-4445-9144-4057ec38491a" (UID: "ec8da3b6-c917-4445-9144-4057ec38491a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.276539 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2d5f8"] Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.276865 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2d5f8" podUID="b8e96926-7c32-4a64-b37d-342a66d925ea" containerName="registry-server" containerID="cri-o://ccc2f3063f3215250c490b43ab7f2ee8d0e8a996fcc377d7ffd5e55e7d09dc26" gracePeriod=30 Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.294945 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nfrjk"] Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.295351 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nfrjk" podUID="0349d46c-bf39-4ba0-99be-22445866386b" containerName="registry-server" containerID="cri-o://1c9eab74cb14f86f36101dc1b05f62f97b33a04cda6ce5013d756e615e14ee74" gracePeriod=30 Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.305750 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c8p6f"] Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.306213 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c8p6f" podUID="a0b31a89-9993-4996-8b19-961efcb757ed" containerName="registry-server" containerID="cri-o://e3b43f532aebe92c13578a1ffc6b49bc0fde97c31ecbf38278971fb211e925ee" gracePeriod=30 Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.319674 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l25mt"] Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.320072 4895 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-l25mt" podUID="886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" containerName="registry-server" containerID="cri-o://3588427658f554bd25430880d41b9d6d518bec086b2fe03d3bab1d50b0d1607a" gracePeriod=30 Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.323771 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rmbm8"] Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.324112 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" podUID="8f6088a3-2691-4029-a576-2a5abcd3b107" containerName="marketplace-operator" containerID="cri-o://91071f5ddb1fd06d97c8cad7580fb55c9c43db65a454a64101eb8fda2e81fdf8" gracePeriod=30 Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.338698 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-85kbq"] Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.339089 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-85kbq" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" containerName="registry-server" containerID="cri-o://09e45f33a61901d71551ee11fd1f805ceedbe67eb4e47d1515816e4aa4aceef6" gracePeriod=30 Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.347454 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-config\") pod \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.347637 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5spm8\" (UniqueName: \"kubernetes.io/projected/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-kube-api-access-5spm8\") pod 
\"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.347692 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-client-ca\") pod \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.347717 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-proxy-ca-bundles\") pod \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.347770 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-serving-cert\") pod \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\" (UID: \"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b\") " Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.348143 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec8da3b6-c917-4445-9144-4057ec38491a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.348161 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec8da3b6-c917-4445-9144-4057ec38491a-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.348174 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfsc6\" (UniqueName: \"kubernetes.io/projected/ec8da3b6-c917-4445-9144-4057ec38491a-kube-api-access-pfsc6\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.348187 
4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec8da3b6-c917-4445-9144-4057ec38491a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.350740 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-client-ca" (OuterVolumeSpecName: "client-ca") pod "13c2c7b6-e2ee-4423-a410-61c6b1ccca7b" (UID: "13c2c7b6-e2ee-4423-a410-61c6b1ccca7b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.350852 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-config" (OuterVolumeSpecName: "config") pod "13c2c7b6-e2ee-4423-a410-61c6b1ccca7b" (UID: "13c2c7b6-e2ee-4423-a410-61c6b1ccca7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.353825 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "13c2c7b6-e2ee-4423-a410-61c6b1ccca7b" (UID: "13c2c7b6-e2ee-4423-a410-61c6b1ccca7b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.358443 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "13c2c7b6-e2ee-4423-a410-61c6b1ccca7b" (UID: "13c2c7b6-e2ee-4423-a410-61c6b1ccca7b"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.359273 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-kube-api-access-5spm8" (OuterVolumeSpecName: "kube-api-access-5spm8") pod "13c2c7b6-e2ee-4423-a410-61c6b1ccca7b" (UID: "13c2c7b6-e2ee-4423-a410-61c6b1ccca7b"). InnerVolumeSpecName "kube-api-access-5spm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.361118 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtxmb"] Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.361502 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wtxmb" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" containerName="registry-server" containerID="cri-o://656df0532adeb13cf6c9afb1413908ef0292d26089195e1975a71e8d8ff4a60d" gracePeriod=30 Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.365864 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b2r4l"] Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.366219 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b2r4l" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" containerName="registry-server" containerID="cri-o://53db58cfb6c74beea66833f15eb593bc28e54bf9049caad3e18c286b6025b9f2" gracePeriod=30 Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.382247 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xgjxz"] Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.382613 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xgjxz" 
podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" containerName="registry-server" containerID="cri-o://1ab528a4a5914d257629425f8b1afc78c2957ecb174b95287edf6687fe113861" gracePeriod=30 Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.394098 4895 generic.go:334] "Generic (PLEG): container finished" podID="13c2c7b6-e2ee-4423-a410-61c6b1ccca7b" containerID="8b30119b09e8f8dd612aef0318b0efa5012ef4a3db6cdb0370ec2e59b66109c2" exitCode=0 Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.394189 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" event={"ID":"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b","Type":"ContainerDied","Data":"8b30119b09e8f8dd612aef0318b0efa5012ef4a3db6cdb0370ec2e59b66109c2"} Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.394231 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" event={"ID":"13c2c7b6-e2ee-4423-a410-61c6b1ccca7b","Type":"ContainerDied","Data":"82fe0339771530654d9bdad7cfcc48b300d45fe3d3ec263c2a149103db617dd1"} Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.394254 4895 scope.go:117] "RemoveContainer" containerID="8b30119b09e8f8dd612aef0318b0efa5012ef4a3db6cdb0370ec2e59b66109c2" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.394456 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57cf459cc5-2fbdq" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.399857 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cdbdn"] Jan 29 08:46:46 crc kubenswrapper[4895]: E0129 08:46:46.401205 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c2c7b6-e2ee-4423-a410-61c6b1ccca7b" containerName="controller-manager" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.401234 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c2c7b6-e2ee-4423-a410-61c6b1ccca7b" containerName="controller-manager" Jan 29 08:46:46 crc kubenswrapper[4895]: E0129 08:46:46.401271 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8da3b6-c917-4445-9144-4057ec38491a" containerName="route-controller-manager" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.401280 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8da3b6-c917-4445-9144-4057ec38491a" containerName="route-controller-manager" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.401664 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="13c2c7b6-e2ee-4423-a410-61c6b1ccca7b" containerName="controller-manager" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.401692 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec8da3b6-c917-4445-9144-4057ec38491a" containerName="route-controller-manager" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.402309 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cdbdn" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.404852 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cdbdn"] Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.414860 4895 generic.go:334] "Generic (PLEG): container finished" podID="ec8da3b6-c917-4445-9144-4057ec38491a" containerID="f7d785b53704a689318302ada3c80ba427ab293a4e442c1c075080f6f8eb3f83" exitCode=0 Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.414937 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" event={"ID":"ec8da3b6-c917-4445-9144-4057ec38491a","Type":"ContainerDied","Data":"f7d785b53704a689318302ada3c80ba427ab293a4e442c1c075080f6f8eb3f83"} Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.414976 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" event={"ID":"ec8da3b6-c917-4445-9144-4057ec38491a","Type":"ContainerDied","Data":"8d227b57f9804c7a50b6166fd5c4a2ed01404cf16097de70afd927c534c2c0d1"} Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.415052 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.451387 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.451421 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5spm8\" (UniqueName: \"kubernetes.io/projected/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-kube-api-access-5spm8\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.451435 4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.451448 4895 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.451460 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.538238 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2d5f8"] Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.553300 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/763fcf96-02dd-48dd-a5b0-40714be2a672-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cdbdn\" (UID: \"763fcf96-02dd-48dd-a5b0-40714be2a672\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-cdbdn" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.553356 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dnvj\" (UniqueName: \"kubernetes.io/projected/763fcf96-02dd-48dd-a5b0-40714be2a672-kube-api-access-4dnvj\") pod \"marketplace-operator-79b997595-cdbdn\" (UID: \"763fcf96-02dd-48dd-a5b0-40714be2a672\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdbdn" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.553406 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/763fcf96-02dd-48dd-a5b0-40714be2a672-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cdbdn\" (UID: \"763fcf96-02dd-48dd-a5b0-40714be2a672\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdbdn" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.654934 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/763fcf96-02dd-48dd-a5b0-40714be2a672-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cdbdn\" (UID: \"763fcf96-02dd-48dd-a5b0-40714be2a672\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdbdn" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.655714 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/763fcf96-02dd-48dd-a5b0-40714be2a672-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cdbdn\" (UID: \"763fcf96-02dd-48dd-a5b0-40714be2a672\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdbdn" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.655744 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4dnvj\" (UniqueName: \"kubernetes.io/projected/763fcf96-02dd-48dd-a5b0-40714be2a672-kube-api-access-4dnvj\") pod \"marketplace-operator-79b997595-cdbdn\" (UID: \"763fcf96-02dd-48dd-a5b0-40714be2a672\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdbdn" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.657319 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/763fcf96-02dd-48dd-a5b0-40714be2a672-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cdbdn\" (UID: \"763fcf96-02dd-48dd-a5b0-40714be2a672\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdbdn" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.666357 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/763fcf96-02dd-48dd-a5b0-40714be2a672-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cdbdn\" (UID: \"763fcf96-02dd-48dd-a5b0-40714be2a672\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdbdn" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.676627 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dnvj\" (UniqueName: \"kubernetes.io/projected/763fcf96-02dd-48dd-a5b0-40714be2a672-kube-api-access-4dnvj\") pod \"marketplace-operator-79b997595-cdbdn\" (UID: \"763fcf96-02dd-48dd-a5b0-40714be2a672\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdbdn" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.725391 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l25mt"] Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.759552 4895 scope.go:117] "RemoveContainer" containerID="8b30119b09e8f8dd612aef0318b0efa5012ef4a3db6cdb0370ec2e59b66109c2" Jan 29 08:46:46 crc kubenswrapper[4895]: E0129 08:46:46.760212 4895 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b30119b09e8f8dd612aef0318b0efa5012ef4a3db6cdb0370ec2e59b66109c2\": container with ID starting with 8b30119b09e8f8dd612aef0318b0efa5012ef4a3db6cdb0370ec2e59b66109c2 not found: ID does not exist" containerID="8b30119b09e8f8dd612aef0318b0efa5012ef4a3db6cdb0370ec2e59b66109c2" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.760260 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b30119b09e8f8dd612aef0318b0efa5012ef4a3db6cdb0370ec2e59b66109c2"} err="failed to get container status \"8b30119b09e8f8dd612aef0318b0efa5012ef4a3db6cdb0370ec2e59b66109c2\": rpc error: code = NotFound desc = could not find container \"8b30119b09e8f8dd612aef0318b0efa5012ef4a3db6cdb0370ec2e59b66109c2\": container with ID starting with 8b30119b09e8f8dd612aef0318b0efa5012ef4a3db6cdb0370ec2e59b66109c2 not found: ID does not exist" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.760297 4895 scope.go:117] "RemoveContainer" containerID="f7d785b53704a689318302ada3c80ba427ab293a4e442c1c075080f6f8eb3f83" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.878488 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cdbdn" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.888250 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57cf459cc5-2fbdq"] Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.888587 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.902472 4895 scope.go:117] "RemoveContainer" containerID="f7d785b53704a689318302ada3c80ba427ab293a4e442c1c075080f6f8eb3f83" Jan 29 08:46:46 crc kubenswrapper[4895]: E0129 08:46:46.903735 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7d785b53704a689318302ada3c80ba427ab293a4e442c1c075080f6f8eb3f83\": container with ID starting with f7d785b53704a689318302ada3c80ba427ab293a4e442c1c075080f6f8eb3f83 not found: ID does not exist" containerID="f7d785b53704a689318302ada3c80ba427ab293a4e442c1c075080f6f8eb3f83" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.903801 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7d785b53704a689318302ada3c80ba427ab293a4e442c1c075080f6f8eb3f83"} err="failed to get container status \"f7d785b53704a689318302ada3c80ba427ab293a4e442c1c075080f6f8eb3f83\": rpc error: code = NotFound desc = could not find container \"f7d785b53704a689318302ada3c80ba427ab293a4e442c1c075080f6f8eb3f83\": container with ID starting with f7d785b53704a689318302ada3c80ba427ab293a4e442c1c075080f6f8eb3f83 not found: ID does not exist" Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.905186 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-57cf459cc5-2fbdq"] Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.910464 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn"] Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.918561 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b9cdbf977-8nlfn"] Jan 29 08:46:46 crc kubenswrapper[4895]: I0129 08:46:46.977416 4895 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.006136 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.015465 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.016320 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.050051 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.055734 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.060110 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8e96926-7c32-4a64-b37d-342a66d925ea-catalog-content\") pod \"b8e96926-7c32-4a64-b37d-342a66d925ea\" (UID: \"b8e96926-7c32-4a64-b37d-342a66d925ea\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.060692 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khfjk\" (UniqueName: \"kubernetes.io/projected/3eb3534d-8971-42ec-8aaf-a970b786e631-kube-api-access-khfjk\") pod \"3eb3534d-8971-42ec-8aaf-a970b786e631\" (UID: \"3eb3534d-8971-42ec-8aaf-a970b786e631\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.061441 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b31a89-9993-4996-8b19-961efcb757ed-utilities\") pod \"a0b31a89-9993-4996-8b19-961efcb757ed\" (UID: \"a0b31a89-9993-4996-8b19-961efcb757ed\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.061486 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b31a89-9993-4996-8b19-961efcb757ed-catalog-content\") pod \"a0b31a89-9993-4996-8b19-961efcb757ed\" (UID: \"a0b31a89-9993-4996-8b19-961efcb757ed\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.061512 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8e96926-7c32-4a64-b37d-342a66d925ea-utilities\") pod \"b8e96926-7c32-4a64-b37d-342a66d925ea\" (UID: \"b8e96926-7c32-4a64-b37d-342a66d925ea\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.061543 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0349d46c-bf39-4ba0-99be-22445866386b-utilities\") pod \"0349d46c-bf39-4ba0-99be-22445866386b\" (UID: \"0349d46c-bf39-4ba0-99be-22445866386b\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.061568 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb3534d-8971-42ec-8aaf-a970b786e631-utilities\") pod \"3eb3534d-8971-42ec-8aaf-a970b786e631\" (UID: \"3eb3534d-8971-42ec-8aaf-a970b786e631\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.061600 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-catalog-content\") pod \"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8\" (UID: \"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.061630 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkl55\" (UniqueName: \"kubernetes.io/projected/b8e96926-7c32-4a64-b37d-342a66d925ea-kube-api-access-bkl55\") pod \"b8e96926-7c32-4a64-b37d-342a66d925ea\" (UID: \"b8e96926-7c32-4a64-b37d-342a66d925ea\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.061664 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f6088a3-2691-4029-a576-2a5abcd3b107-marketplace-trusted-ca\") pod \"8f6088a3-2691-4029-a576-2a5abcd3b107\" (UID: \"8f6088a3-2691-4029-a576-2a5abcd3b107\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.061685 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-utilities\") pod \"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8\" (UID: \"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8\") " Jan 29 08:46:47 crc 
kubenswrapper[4895]: I0129 08:46:47.061712 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtsfn\" (UniqueName: \"kubernetes.io/projected/a0b31a89-9993-4996-8b19-961efcb757ed-kube-api-access-rtsfn\") pod \"a0b31a89-9993-4996-8b19-961efcb757ed\" (UID: \"a0b31a89-9993-4996-8b19-961efcb757ed\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.061736 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-catalog-content\") pod \"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb\" (UID: \"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.061763 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0349d46c-bf39-4ba0-99be-22445866386b-catalog-content\") pod \"0349d46c-bf39-4ba0-99be-22445866386b\" (UID: \"0349d46c-bf39-4ba0-99be-22445866386b\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.062489 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0b31a89-9993-4996-8b19-961efcb757ed-utilities" (OuterVolumeSpecName: "utilities") pod "a0b31a89-9993-4996-8b19-961efcb757ed" (UID: "a0b31a89-9993-4996-8b19-961efcb757ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.062994 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f6088a3-2691-4029-a576-2a5abcd3b107-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8f6088a3-2691-4029-a576-2a5abcd3b107" (UID: "8f6088a3-2691-4029-a576-2a5abcd3b107"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.063654 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb3534d-8971-42ec-8aaf-a970b786e631-utilities" (OuterVolumeSpecName: "utilities") pod "3eb3534d-8971-42ec-8aaf-a970b786e631" (UID: "3eb3534d-8971-42ec-8aaf-a970b786e631"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.065199 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8e96926-7c32-4a64-b37d-342a66d925ea-utilities" (OuterVolumeSpecName: "utilities") pod "b8e96926-7c32-4a64-b37d-342a66d925ea" (UID: "b8e96926-7c32-4a64-b37d-342a66d925ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.065935 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-utilities" (OuterVolumeSpecName: "utilities") pod "bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" (UID: "bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.076064 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b31a89-9993-4996-8b19-961efcb757ed-kube-api-access-rtsfn" (OuterVolumeSpecName: "kube-api-access-rtsfn") pod "a0b31a89-9993-4996-8b19-961efcb757ed" (UID: "a0b31a89-9993-4996-8b19-961efcb757ed"). InnerVolumeSpecName "kube-api-access-rtsfn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.078302 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0349d46c-bf39-4ba0-99be-22445866386b-utilities" (OuterVolumeSpecName: "utilities") pod "0349d46c-bf39-4ba0-99be-22445866386b" (UID: "0349d46c-bf39-4ba0-99be-22445866386b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.081697 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e96926-7c32-4a64-b37d-342a66d925ea-kube-api-access-bkl55" (OuterVolumeSpecName: "kube-api-access-bkl55") pod "b8e96926-7c32-4a64-b37d-342a66d925ea" (UID: "b8e96926-7c32-4a64-b37d-342a66d925ea"). InnerVolumeSpecName "kube-api-access-bkl55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.086832 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb3534d-8971-42ec-8aaf-a970b786e631-kube-api-access-khfjk" (OuterVolumeSpecName: "kube-api-access-khfjk") pod "3eb3534d-8971-42ec-8aaf-a970b786e631" (UID: "3eb3534d-8971-42ec-8aaf-a970b786e631"). InnerVolumeSpecName "kube-api-access-khfjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.104203 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" (UID: "bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.108180 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130201 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m"] Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130606 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0349d46c-bf39-4ba0-99be-22445866386b" containerName="extract-content" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130624 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0349d46c-bf39-4ba0-99be-22445866386b" containerName="extract-content" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130633 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" containerName="extract-content" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130640 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" containerName="extract-content" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130650 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f6088a3-2691-4029-a576-2a5abcd3b107" containerName="marketplace-operator" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130658 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f6088a3-2691-4029-a576-2a5abcd3b107" containerName="marketplace-operator" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130670 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b31a89-9993-4996-8b19-961efcb757ed" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130676 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b31a89-9993-4996-8b19-961efcb757ed" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130686 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" containerName="extract-utilities" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130692 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" containerName="extract-utilities" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130701 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f6088a3-2691-4029-a576-2a5abcd3b107" containerName="marketplace-operator" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130708 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f6088a3-2691-4029-a576-2a5abcd3b107" containerName="marketplace-operator" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130715 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" containerName="extract-utilities" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130722 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" containerName="extract-utilities" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130732 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130741 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130753 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130759 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130772 4895 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" containerName="extract-content" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130778 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" containerName="extract-content" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130787 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b31a89-9993-4996-8b19-961efcb757ed" containerName="extract-utilities" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130794 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b31a89-9993-4996-8b19-961efcb757ed" containerName="extract-utilities" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130812 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130821 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130827 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" containerName="extract-utilities" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130834 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" containerName="extract-utilities" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130842 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" containerName="extract-content" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130848 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" containerName="extract-content" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130858 4895 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130866 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130875 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e96926-7c32-4a64-b37d-342a66d925ea" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130881 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e96926-7c32-4a64-b37d-342a66d925ea" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130890 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" containerName="extract-utilities" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130896 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" containerName="extract-utilities" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130904 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" containerName="extract-content" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130910 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" containerName="extract-content" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130933 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b31a89-9993-4996-8b19-961efcb757ed" containerName="extract-content" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130940 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b31a89-9993-4996-8b19-961efcb757ed" containerName="extract-content" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130950 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0349d46c-bf39-4ba0-99be-22445866386b" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130956 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0349d46c-bf39-4ba0-99be-22445866386b" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130963 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0349d46c-bf39-4ba0-99be-22445866386b" containerName="extract-utilities" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130970 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0349d46c-bf39-4ba0-99be-22445866386b" containerName="extract-utilities" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130979 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e96926-7c32-4a64-b37d-342a66d925ea" containerName="extract-content" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.130986 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e96926-7c32-4a64-b37d-342a66d925ea" containerName="extract-content" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.130993 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e96926-7c32-4a64-b37d-342a66d925ea" containerName="extract-utilities" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.131000 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e96926-7c32-4a64-b37d-342a66d925ea" containerName="extract-utilities" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.131113 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f6088a3-2691-4029-a576-2a5abcd3b107" containerName="marketplace-operator" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.131120 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.131130 4895 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="0349d46c-bf39-4ba0-99be-22445866386b" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.131138 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.131148 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e96926-7c32-4a64-b37d-342a66d925ea" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.131159 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0b31a89-9993-4996-8b19-961efcb757ed" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.131167 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.131175 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" containerName="registry-server" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.131184 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f6088a3-2691-4029-a576-2a5abcd3b107" containerName="marketplace-operator" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.131809 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.138243 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.138560 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.138697 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.138822 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.139014 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.139130 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.140479 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.145619 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.145995 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.146149 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.146362 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.148317 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.148895 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.151668 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.151807 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.154758 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.175816 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9"] Jan 29 08:46:47 crc 
kubenswrapper[4895]: I0129 08:46:47.176599 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m99kq\" (UniqueName: \"kubernetes.io/projected/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-kube-api-access-m99kq\") pod \"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb\" (UID: \"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.176636 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb3534d-8971-42ec-8aaf-a970b786e631-catalog-content\") pod \"3eb3534d-8971-42ec-8aaf-a970b786e631\" (UID: \"3eb3534d-8971-42ec-8aaf-a970b786e631\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.176658 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-utilities\") pod \"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb\" (UID: \"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.176680 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztwrd\" (UniqueName: \"kubernetes.io/projected/0349d46c-bf39-4ba0-99be-22445866386b-kube-api-access-ztwrd\") pod \"0349d46c-bf39-4ba0-99be-22445866386b\" (UID: \"0349d46c-bf39-4ba0-99be-22445866386b\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.176829 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ndgf\" (UniqueName: \"kubernetes.io/projected/8f6088a3-2691-4029-a576-2a5abcd3b107-kube-api-access-4ndgf\") pod \"8f6088a3-2691-4029-a576-2a5abcd3b107\" (UID: \"8f6088a3-2691-4029-a576-2a5abcd3b107\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.176859 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrgzm\" (UniqueName: 
\"kubernetes.io/projected/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-kube-api-access-wrgzm\") pod \"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8\" (UID: \"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.176880 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f6088a3-2691-4029-a576-2a5abcd3b107-marketplace-operator-metrics\") pod \"8f6088a3-2691-4029-a576-2a5abcd3b107\" (UID: \"8f6088a3-2691-4029-a576-2a5abcd3b107\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.178581 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-utilities" (OuterVolumeSpecName: "utilities") pod "886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" (UID: "886dfa02-5b87-4bdf-9bf5-fcd914ff2afb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179392 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-proxy-ca-bundles\") pod \"controller-manager-6ffd5d66b-7wxd9\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179470 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck78q\" (UniqueName: \"kubernetes.io/projected/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-kube-api-access-ck78q\") pod \"controller-manager-6ffd5d66b-7wxd9\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179501 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-config\") pod \"controller-manager-6ffd5d66b-7wxd9\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179601 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-serving-cert\") pod \"controller-manager-6ffd5d66b-7wxd9\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179630 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgltg\" (UniqueName: \"kubernetes.io/projected/0f4786eb-1530-40f3-ac26-668545f5e88f-kube-api-access-kgltg\") pod \"route-controller-manager-57cffcc444-fgj6m\" (UID: \"0f4786eb-1530-40f3-ac26-668545f5e88f\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179661 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4786eb-1530-40f3-ac26-668545f5e88f-serving-cert\") pod \"route-controller-manager-57cffcc444-fgj6m\" (UID: \"0f4786eb-1530-40f3-ac26-668545f5e88f\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179681 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4786eb-1530-40f3-ac26-668545f5e88f-config\") pod \"route-controller-manager-57cffcc444-fgj6m\" 
(UID: \"0f4786eb-1530-40f3-ac26-668545f5e88f\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179714 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-client-ca\") pod \"controller-manager-6ffd5d66b-7wxd9\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179761 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f4786eb-1530-40f3-ac26-668545f5e88f-client-ca\") pod \"route-controller-manager-57cffcc444-fgj6m\" (UID: \"0f4786eb-1530-40f3-ac26-668545f5e88f\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179838 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179852 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khfjk\" (UniqueName: \"kubernetes.io/projected/3eb3534d-8971-42ec-8aaf-a970b786e631-kube-api-access-khfjk\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179863 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b31a89-9993-4996-8b19-961efcb757ed-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179874 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b8e96926-7c32-4a64-b37d-342a66d925ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179884 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0349d46c-bf39-4ba0-99be-22445866386b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179894 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb3534d-8971-42ec-8aaf-a970b786e631-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179905 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179933 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkl55\" (UniqueName: \"kubernetes.io/projected/b8e96926-7c32-4a64-b37d-342a66d925ea-kube-api-access-bkl55\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179943 4895 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f6088a3-2691-4029-a576-2a5abcd3b107-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179953 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.179962 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtsfn\" (UniqueName: \"kubernetes.io/projected/a0b31a89-9993-4996-8b19-961efcb757ed-kube-api-access-rtsfn\") on node \"crc\" DevicePath 
\"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.188653 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" (UID: "886dfa02-5b87-4bdf-9bf5-fcd914ff2afb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.190449 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-kube-api-access-wrgzm" (OuterVolumeSpecName: "kube-api-access-wrgzm") pod "bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" (UID: "bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8"). InnerVolumeSpecName "kube-api-access-wrgzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.190692 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0b31a89-9993-4996-8b19-961efcb757ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0b31a89-9993-4996-8b19-961efcb757ed" (UID: "a0b31a89-9993-4996-8b19-961efcb757ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.192884 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-kube-api-access-m99kq" (OuterVolumeSpecName: "kube-api-access-m99kq") pod "886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" (UID: "886dfa02-5b87-4bdf-9bf5-fcd914ff2afb"). InnerVolumeSpecName "kube-api-access-m99kq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.193634 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0349d46c-bf39-4ba0-99be-22445866386b-kube-api-access-ztwrd" (OuterVolumeSpecName: "kube-api-access-ztwrd") pod "0349d46c-bf39-4ba0-99be-22445866386b" (UID: "0349d46c-bf39-4ba0-99be-22445866386b"). InnerVolumeSpecName "kube-api-access-ztwrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.195294 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6088a3-2691-4029-a576-2a5abcd3b107-kube-api-access-4ndgf" (OuterVolumeSpecName: "kube-api-access-4ndgf") pod "8f6088a3-2691-4029-a576-2a5abcd3b107" (UID: "8f6088a3-2691-4029-a576-2a5abcd3b107"). InnerVolumeSpecName "kube-api-access-4ndgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.201898 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8e96926-7c32-4a64-b37d-342a66d925ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8e96926-7c32-4a64-b37d-342a66d925ea" (UID: "b8e96926-7c32-4a64-b37d-342a66d925ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.204789 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6088a3-2691-4029-a576-2a5abcd3b107-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8f6088a3-2691-4029-a576-2a5abcd3b107" (UID: "8f6088a3-2691-4029-a576-2a5abcd3b107"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.220517 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0349d46c-bf39-4ba0-99be-22445866386b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0349d46c-bf39-4ba0-99be-22445866386b" (UID: "0349d46c-bf39-4ba0-99be-22445866386b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.244259 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13c2c7b6-e2ee-4423-a410-61c6b1ccca7b" path="/var/lib/kubelet/pods/13c2c7b6-e2ee-4423-a410-61c6b1ccca7b/volumes" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.248776 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec8da3b6-c917-4445-9144-4057ec38491a" path="/var/lib/kubelet/pods/ec8da3b6-c917-4445-9144-4057ec38491a/volumes" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.269412 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb3534d-8971-42ec-8aaf-a970b786e631-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3eb3534d-8971-42ec-8aaf-a970b786e631" (UID: "3eb3534d-8971-42ec-8aaf-a970b786e631"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.280733 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.280749 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkzlf\" (UniqueName: \"kubernetes.io/projected/c14af255-6e29-4bea-978b-8b5bf6285bd8-kube-api-access-qkzlf\") pod \"c14af255-6e29-4bea-978b-8b5bf6285bd8\" (UID: \"c14af255-6e29-4bea-978b-8b5bf6285bd8\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.281340 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c14af255-6e29-4bea-978b-8b5bf6285bd8-utilities\") pod \"c14af255-6e29-4bea-978b-8b5bf6285bd8\" (UID: \"c14af255-6e29-4bea-978b-8b5bf6285bd8\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.281405 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c14af255-6e29-4bea-978b-8b5bf6285bd8-catalog-content\") pod \"c14af255-6e29-4bea-978b-8b5bf6285bd8\" (UID: \"c14af255-6e29-4bea-978b-8b5bf6285bd8\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.281552 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4786eb-1530-40f3-ac26-668545f5e88f-config\") pod \"route-controller-manager-57cffcc444-fgj6m\" (UID: \"0f4786eb-1530-40f3-ac26-668545f5e88f\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.281581 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-client-ca\") pod \"controller-manager-6ffd5d66b-7wxd9\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc 
kubenswrapper[4895]: I0129 08:46:47.282547 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c14af255-6e29-4bea-978b-8b5bf6285bd8-utilities" (OuterVolumeSpecName: "utilities") pod "c14af255-6e29-4bea-978b-8b5bf6285bd8" (UID: "c14af255-6e29-4bea-978b-8b5bf6285bd8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283049 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f4786eb-1530-40f3-ac26-668545f5e88f-client-ca\") pod \"route-controller-manager-57cffcc444-fgj6m\" (UID: \"0f4786eb-1530-40f3-ac26-668545f5e88f\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283108 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4786eb-1530-40f3-ac26-668545f5e88f-config\") pod \"route-controller-manager-57cffcc444-fgj6m\" (UID: \"0f4786eb-1530-40f3-ac26-668545f5e88f\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283156 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-proxy-ca-bundles\") pod \"controller-manager-6ffd5d66b-7wxd9\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283213 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck78q\" (UniqueName: \"kubernetes.io/projected/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-kube-api-access-ck78q\") pod \"controller-manager-6ffd5d66b-7wxd9\" (UID: 
\"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283233 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-config\") pod \"controller-manager-6ffd5d66b-7wxd9\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283300 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-serving-cert\") pod \"controller-manager-6ffd5d66b-7wxd9\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283323 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgltg\" (UniqueName: \"kubernetes.io/projected/0f4786eb-1530-40f3-ac26-668545f5e88f-kube-api-access-kgltg\") pod \"route-controller-manager-57cffcc444-fgj6m\" (UID: \"0f4786eb-1530-40f3-ac26-668545f5e88f\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283370 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4786eb-1530-40f3-ac26-668545f5e88f-serving-cert\") pod \"route-controller-manager-57cffcc444-fgj6m\" (UID: \"0f4786eb-1530-40f3-ac26-668545f5e88f\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283453 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283484 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0349d46c-bf39-4ba0-99be-22445866386b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283496 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ndgf\" (UniqueName: \"kubernetes.io/projected/8f6088a3-2691-4029-a576-2a5abcd3b107-kube-api-access-4ndgf\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283508 4895 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f6088a3-2691-4029-a576-2a5abcd3b107-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283519 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrgzm\" (UniqueName: \"kubernetes.io/projected/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8-kube-api-access-wrgzm\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283530 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8e96926-7c32-4a64-b37d-342a66d925ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283541 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m99kq\" (UniqueName: \"kubernetes.io/projected/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb-kube-api-access-m99kq\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283553 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3eb3534d-8971-42ec-8aaf-a970b786e631-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283568 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztwrd\" (UniqueName: \"kubernetes.io/projected/0349d46c-bf39-4ba0-99be-22445866386b-kube-api-access-ztwrd\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283577 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b31a89-9993-4996-8b19-961efcb757ed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283586 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c14af255-6e29-4bea-978b-8b5bf6285bd8-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.283837 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f4786eb-1530-40f3-ac26-668545f5e88f-client-ca\") pod \"route-controller-manager-57cffcc444-fgj6m\" (UID: \"0f4786eb-1530-40f3-ac26-668545f5e88f\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.287212 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4786eb-1530-40f3-ac26-668545f5e88f-serving-cert\") pod \"route-controller-manager-57cffcc444-fgj6m\" (UID: \"0f4786eb-1530-40f3-ac26-668545f5e88f\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.287680 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-client-ca\") pod \"controller-manager-6ffd5d66b-7wxd9\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.287771 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-config\") pod \"controller-manager-6ffd5d66b-7wxd9\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.289754 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c14af255-6e29-4bea-978b-8b5bf6285bd8-kube-api-access-qkzlf" (OuterVolumeSpecName: "kube-api-access-qkzlf") pod "c14af255-6e29-4bea-978b-8b5bf6285bd8" (UID: "c14af255-6e29-4bea-978b-8b5bf6285bd8"). InnerVolumeSpecName "kube-api-access-qkzlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.290283 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-serving-cert\") pod \"controller-manager-6ffd5d66b-7wxd9\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.293300 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-proxy-ca-bundles\") pod \"controller-manager-6ffd5d66b-7wxd9\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.309780 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgltg\" (UniqueName: \"kubernetes.io/projected/0f4786eb-1530-40f3-ac26-668545f5e88f-kube-api-access-kgltg\") pod \"route-controller-manager-57cffcc444-fgj6m\" (UID: \"0f4786eb-1530-40f3-ac26-668545f5e88f\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.312580 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck78q\" (UniqueName: \"kubernetes.io/projected/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-kube-api-access-ck78q\") pod \"controller-manager-6ffd5d66b-7wxd9\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.385712 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkzlf\" (UniqueName: \"kubernetes.io/projected/c14af255-6e29-4bea-978b-8b5bf6285bd8-kube-api-access-qkzlf\") 
on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.430641 4895 generic.go:334] "Generic (PLEG): container finished" podID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" containerID="53db58cfb6c74beea66833f15eb593bc28e54bf9049caad3e18c286b6025b9f2" exitCode=0 Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.430707 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b2r4l" event={"ID":"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124","Type":"ContainerDied","Data":"53db58cfb6c74beea66833f15eb593bc28e54bf9049caad3e18c286b6025b9f2"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.430737 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b2r4l" event={"ID":"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124","Type":"ContainerDied","Data":"9b2eebf7b9469dfd1328c1eff016335dd45c27a0246193f8403bde27e0c8e481"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.430759 4895 scope.go:117] "RemoveContainer" containerID="53db58cfb6c74beea66833f15eb593bc28e54bf9049caad3e18c286b6025b9f2" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.430902 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b2r4l" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.436379 4895 generic.go:334] "Generic (PLEG): container finished" podID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" containerID="09e45f33a61901d71551ee11fd1f805ceedbe67eb4e47d1515816e4aa4aceef6" exitCode=0 Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.436430 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-85kbq" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.436474 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85kbq" event={"ID":"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8","Type":"ContainerDied","Data":"09e45f33a61901d71551ee11fd1f805ceedbe67eb4e47d1515816e4aa4aceef6"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.436525 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85kbq" event={"ID":"bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8","Type":"ContainerDied","Data":"9ee2c74fdeeb557eb39bad2302aa026dfe87aea192849bd8b76915ff18165e02"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.440826 4895 generic.go:334] "Generic (PLEG): container finished" podID="886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" containerID="3588427658f554bd25430880d41b9d6d518bec086b2fe03d3bab1d50b0d1607a" exitCode=0 Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.440881 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l25mt" event={"ID":"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb","Type":"ContainerDied","Data":"3588427658f554bd25430880d41b9d6d518bec086b2fe03d3bab1d50b0d1607a"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.440900 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l25mt" event={"ID":"886dfa02-5b87-4bdf-9bf5-fcd914ff2afb","Type":"ContainerDied","Data":"64c867c098b286972ccd47007db6511ecf118a0e7dced0709126f7308ff70125"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.443557 4895 generic.go:334] "Generic (PLEG): container finished" podID="c14af255-6e29-4bea-978b-8b5bf6285bd8" containerID="1ab528a4a5914d257629425f8b1afc78c2957ecb174b95287edf6687fe113861" exitCode=0 Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.443624 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-xgjxz" event={"ID":"c14af255-6e29-4bea-978b-8b5bf6285bd8","Type":"ContainerDied","Data":"1ab528a4a5914d257629425f8b1afc78c2957ecb174b95287edf6687fe113861"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.443650 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgjxz" event={"ID":"c14af255-6e29-4bea-978b-8b5bf6285bd8","Type":"ContainerDied","Data":"9a35eb9ff0cf940df20282c3d9e681bac5874a890a9130b098f32f415ff577e4"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.443729 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xgjxz" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.445370 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l25mt" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.447577 4895 generic.go:334] "Generic (PLEG): container finished" podID="3eb3534d-8971-42ec-8aaf-a970b786e631" containerID="656df0532adeb13cf6c9afb1413908ef0292d26089195e1975a71e8d8ff4a60d" exitCode=0 Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.447739 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtxmb" event={"ID":"3eb3534d-8971-42ec-8aaf-a970b786e631","Type":"ContainerDied","Data":"656df0532adeb13cf6c9afb1413908ef0292d26089195e1975a71e8d8ff4a60d"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.447786 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wtxmb" event={"ID":"3eb3534d-8971-42ec-8aaf-a970b786e631","Type":"ContainerDied","Data":"ca9f2fd489e64420852037e14a5788202c3f7211c2d7cb90379424a98112daeb"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.447905 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wtxmb" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.452014 4895 scope.go:117] "RemoveContainer" containerID="a99acb26dd7a671163f81fba56de147fac20be8aaf15694217cf082ea9dc7cf2" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.453569 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c14af255-6e29-4bea-978b-8b5bf6285bd8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c14af255-6e29-4bea-978b-8b5bf6285bd8" (UID: "c14af255-6e29-4bea-978b-8b5bf6285bd8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.457786 4895 generic.go:334] "Generic (PLEG): container finished" podID="0349d46c-bf39-4ba0-99be-22445866386b" containerID="1c9eab74cb14f86f36101dc1b05f62f97b33a04cda6ce5013d756e615e14ee74" exitCode=0 Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.457896 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nfrjk" event={"ID":"0349d46c-bf39-4ba0-99be-22445866386b","Type":"ContainerDied","Data":"1c9eab74cb14f86f36101dc1b05f62f97b33a04cda6ce5013d756e615e14ee74"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.457880 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nfrjk" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.458826 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nfrjk" event={"ID":"0349d46c-bf39-4ba0-99be-22445866386b","Type":"ContainerDied","Data":"8405444788f55533c58471370a810b47db0fce372d2c52743ef496a45a7873c9"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.459971 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-85kbq"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.464346 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.465977 4895 generic.go:334] "Generic (PLEG): container finished" podID="a0b31a89-9993-4996-8b19-961efcb757ed" containerID="e3b43f532aebe92c13578a1ffc6b49bc0fde97c31ecbf38278971fb211e925ee" exitCode=0 Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.466130 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8p6f" event={"ID":"a0b31a89-9993-4996-8b19-961efcb757ed","Type":"ContainerDied","Data":"e3b43f532aebe92c13578a1ffc6b49bc0fde97c31ecbf38278971fb211e925ee"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.466166 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8p6f" event={"ID":"a0b31a89-9993-4996-8b19-961efcb757ed","Type":"ContainerDied","Data":"4cc7263fe7741a49ddca371c22dd6537efc466132608f6fe8762790778cc18c9"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.466086 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8p6f" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.467199 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-85kbq"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.476461 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.482281 4895 generic.go:334] "Generic (PLEG): container finished" podID="b8e96926-7c32-4a64-b37d-342a66d925ea" containerID="ccc2f3063f3215250c490b43ab7f2ee8d0e8a996fcc377d7ffd5e55e7d09dc26" exitCode=0 Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.482403 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2d5f8" event={"ID":"b8e96926-7c32-4a64-b37d-342a66d925ea","Type":"ContainerDied","Data":"ccc2f3063f3215250c490b43ab7f2ee8d0e8a996fcc377d7ffd5e55e7d09dc26"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.482438 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2d5f8" event={"ID":"b8e96926-7c32-4a64-b37d-342a66d925ea","Type":"ContainerDied","Data":"2d08763cf1982f8c64ded4f6ac54dacc982c229be8ab06e950855748f29c892f"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.482559 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2d5f8" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.482773 4895 scope.go:117] "RemoveContainer" containerID="abc17d4f507045f655e3172c35020c303d13c73c2bd99b061e186f41d5fabd99" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.486825 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-utilities\") pod \"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124\" (UID: \"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.486894 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8tnz\" (UniqueName: \"kubernetes.io/projected/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-kube-api-access-j8tnz\") pod \"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124\" (UID: \"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.487186 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-catalog-content\") pod \"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124\" (UID: \"a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124\") " Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.490377 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-utilities" (OuterVolumeSpecName: "utilities") pod "a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" (UID: "a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.491769 4895 generic.go:334] "Generic (PLEG): container finished" podID="8f6088a3-2691-4029-a576-2a5abcd3b107" containerID="91071f5ddb1fd06d97c8cad7580fb55c9c43db65a454a64101eb8fda2e81fdf8" exitCode=0 Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.491821 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" event={"ID":"8f6088a3-2691-4029-a576-2a5abcd3b107","Type":"ContainerDied","Data":"91071f5ddb1fd06d97c8cad7580fb55c9c43db65a454a64101eb8fda2e81fdf8"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.491858 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" event={"ID":"8f6088a3-2691-4029-a576-2a5abcd3b107","Type":"ContainerDied","Data":"2a567cb0889e34de37aecc6201de725a331be68760ceedf1425b6a1ba14924e9"} Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.492284 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rmbm8" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.500633 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l25mt"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.508902 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c14af255-6e29-4bea-978b-8b5bf6285bd8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.510682 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-kube-api-access-j8tnz" (OuterVolumeSpecName: "kube-api-access-j8tnz") pod "a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" (UID: "a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124"). 
InnerVolumeSpecName "kube-api-access-j8tnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.521742 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l25mt"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.549247 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nfrjk"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.558335 4895 scope.go:117] "RemoveContainer" containerID="53db58cfb6c74beea66833f15eb593bc28e54bf9049caad3e18c286b6025b9f2" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.558816 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53db58cfb6c74beea66833f15eb593bc28e54bf9049caad3e18c286b6025b9f2\": container with ID starting with 53db58cfb6c74beea66833f15eb593bc28e54bf9049caad3e18c286b6025b9f2 not found: ID does not exist" containerID="53db58cfb6c74beea66833f15eb593bc28e54bf9049caad3e18c286b6025b9f2" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.558854 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53db58cfb6c74beea66833f15eb593bc28e54bf9049caad3e18c286b6025b9f2"} err="failed to get container status \"53db58cfb6c74beea66833f15eb593bc28e54bf9049caad3e18c286b6025b9f2\": rpc error: code = NotFound desc = could not find container \"53db58cfb6c74beea66833f15eb593bc28e54bf9049caad3e18c286b6025b9f2\": container with ID starting with 53db58cfb6c74beea66833f15eb593bc28e54bf9049caad3e18c286b6025b9f2 not found: ID does not exist" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.558880 4895 scope.go:117] "RemoveContainer" containerID="a99acb26dd7a671163f81fba56de147fac20be8aaf15694217cf082ea9dc7cf2" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.559330 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"a99acb26dd7a671163f81fba56de147fac20be8aaf15694217cf082ea9dc7cf2\": container with ID starting with a99acb26dd7a671163f81fba56de147fac20be8aaf15694217cf082ea9dc7cf2 not found: ID does not exist" containerID="a99acb26dd7a671163f81fba56de147fac20be8aaf15694217cf082ea9dc7cf2" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.559353 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a99acb26dd7a671163f81fba56de147fac20be8aaf15694217cf082ea9dc7cf2"} err="failed to get container status \"a99acb26dd7a671163f81fba56de147fac20be8aaf15694217cf082ea9dc7cf2\": rpc error: code = NotFound desc = could not find container \"a99acb26dd7a671163f81fba56de147fac20be8aaf15694217cf082ea9dc7cf2\": container with ID starting with a99acb26dd7a671163f81fba56de147fac20be8aaf15694217cf082ea9dc7cf2 not found: ID does not exist" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.559366 4895 scope.go:117] "RemoveContainer" containerID="abc17d4f507045f655e3172c35020c303d13c73c2bd99b061e186f41d5fabd99" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.559575 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abc17d4f507045f655e3172c35020c303d13c73c2bd99b061e186f41d5fabd99\": container with ID starting with abc17d4f507045f655e3172c35020c303d13c73c2bd99b061e186f41d5fabd99 not found: ID does not exist" containerID="abc17d4f507045f655e3172c35020c303d13c73c2bd99b061e186f41d5fabd99" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.559594 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abc17d4f507045f655e3172c35020c303d13c73c2bd99b061e186f41d5fabd99"} err="failed to get container status \"abc17d4f507045f655e3172c35020c303d13c73c2bd99b061e186f41d5fabd99\": rpc error: code = NotFound desc = could not find container 
\"abc17d4f507045f655e3172c35020c303d13c73c2bd99b061e186f41d5fabd99\": container with ID starting with abc17d4f507045f655e3172c35020c303d13c73c2bd99b061e186f41d5fabd99 not found: ID does not exist" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.559607 4895 scope.go:117] "RemoveContainer" containerID="09e45f33a61901d71551ee11fd1f805ceedbe67eb4e47d1515816e4aa4aceef6" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.565251 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nfrjk"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.571191 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtxmb"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.579957 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wtxmb"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.583687 4895 scope.go:117] "RemoveContainer" containerID="eeda50a9c75e1244d10cf88d1300fc37b335b0cab8d1ceeceae25b8ed91758ca" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.584558 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2d5f8"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.589963 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2d5f8"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.595175 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c8p6f"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.598777 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c8p6f"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.602171 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rmbm8"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 
08:46:47.605238 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rmbm8"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.606083 4895 scope.go:117] "RemoveContainer" containerID="ec07d9d211714fe810519f87b111b9ebc115688661492029c032fba2aed09110" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.608064 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cdbdn"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.612283 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.612312 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8tnz\" (UniqueName: \"kubernetes.io/projected/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-kube-api-access-j8tnz\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.633218 4895 scope.go:117] "RemoveContainer" containerID="09e45f33a61901d71551ee11fd1f805ceedbe67eb4e47d1515816e4aa4aceef6" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.636510 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09e45f33a61901d71551ee11fd1f805ceedbe67eb4e47d1515816e4aa4aceef6\": container with ID starting with 09e45f33a61901d71551ee11fd1f805ceedbe67eb4e47d1515816e4aa4aceef6 not found: ID does not exist" containerID="09e45f33a61901d71551ee11fd1f805ceedbe67eb4e47d1515816e4aa4aceef6" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.636543 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09e45f33a61901d71551ee11fd1f805ceedbe67eb4e47d1515816e4aa4aceef6"} err="failed to get container status 
\"09e45f33a61901d71551ee11fd1f805ceedbe67eb4e47d1515816e4aa4aceef6\": rpc error: code = NotFound desc = could not find container \"09e45f33a61901d71551ee11fd1f805ceedbe67eb4e47d1515816e4aa4aceef6\": container with ID starting with 09e45f33a61901d71551ee11fd1f805ceedbe67eb4e47d1515816e4aa4aceef6 not found: ID does not exist" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.636576 4895 scope.go:117] "RemoveContainer" containerID="eeda50a9c75e1244d10cf88d1300fc37b335b0cab8d1ceeceae25b8ed91758ca" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.637596 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeda50a9c75e1244d10cf88d1300fc37b335b0cab8d1ceeceae25b8ed91758ca\": container with ID starting with eeda50a9c75e1244d10cf88d1300fc37b335b0cab8d1ceeceae25b8ed91758ca not found: ID does not exist" containerID="eeda50a9c75e1244d10cf88d1300fc37b335b0cab8d1ceeceae25b8ed91758ca" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.637621 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeda50a9c75e1244d10cf88d1300fc37b335b0cab8d1ceeceae25b8ed91758ca"} err="failed to get container status \"eeda50a9c75e1244d10cf88d1300fc37b335b0cab8d1ceeceae25b8ed91758ca\": rpc error: code = NotFound desc = could not find container \"eeda50a9c75e1244d10cf88d1300fc37b335b0cab8d1ceeceae25b8ed91758ca\": container with ID starting with eeda50a9c75e1244d10cf88d1300fc37b335b0cab8d1ceeceae25b8ed91758ca not found: ID does not exist" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.637636 4895 scope.go:117] "RemoveContainer" containerID="ec07d9d211714fe810519f87b111b9ebc115688661492029c032fba2aed09110" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.639107 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ec07d9d211714fe810519f87b111b9ebc115688661492029c032fba2aed09110\": container with ID starting with ec07d9d211714fe810519f87b111b9ebc115688661492029c032fba2aed09110 not found: ID does not exist" containerID="ec07d9d211714fe810519f87b111b9ebc115688661492029c032fba2aed09110" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.639290 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec07d9d211714fe810519f87b111b9ebc115688661492029c032fba2aed09110"} err="failed to get container status \"ec07d9d211714fe810519f87b111b9ebc115688661492029c032fba2aed09110\": rpc error: code = NotFound desc = could not find container \"ec07d9d211714fe810519f87b111b9ebc115688661492029c032fba2aed09110\": container with ID starting with ec07d9d211714fe810519f87b111b9ebc115688661492029c032fba2aed09110 not found: ID does not exist" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.639359 4895 scope.go:117] "RemoveContainer" containerID="3588427658f554bd25430880d41b9d6d518bec086b2fe03d3bab1d50b0d1607a" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.665753 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" (UID: "a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.675806 4895 scope.go:117] "RemoveContainer" containerID="89a12fb719283e9a6442676644fc65f122f2b2af54832d067ae6ccd2f9abe056" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.713032 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.751888 4895 scope.go:117] "RemoveContainer" containerID="c516f925cbb38c2a0e36e4a49ede7ada25386608daca55e0968c629a2c4bbc50" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.778545 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b2r4l"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.783081 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b2r4l"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.798213 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.852067 4895 scope.go:117] "RemoveContainer" containerID="3588427658f554bd25430880d41b9d6d518bec086b2fe03d3bab1d50b0d1607a" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.852631 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3588427658f554bd25430880d41b9d6d518bec086b2fe03d3bab1d50b0d1607a\": container with ID starting with 3588427658f554bd25430880d41b9d6d518bec086b2fe03d3bab1d50b0d1607a not found: ID does not exist" containerID="3588427658f554bd25430880d41b9d6d518bec086b2fe03d3bab1d50b0d1607a" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.852663 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3588427658f554bd25430880d41b9d6d518bec086b2fe03d3bab1d50b0d1607a"} err="failed to get container status \"3588427658f554bd25430880d41b9d6d518bec086b2fe03d3bab1d50b0d1607a\": rpc error: code = NotFound desc = could not find container \"3588427658f554bd25430880d41b9d6d518bec086b2fe03d3bab1d50b0d1607a\": container with ID starting with 3588427658f554bd25430880d41b9d6d518bec086b2fe03d3bab1d50b0d1607a not found: ID does not exist" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.852687 4895 scope.go:117] "RemoveContainer" containerID="89a12fb719283e9a6442676644fc65f122f2b2af54832d067ae6ccd2f9abe056" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.853084 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89a12fb719283e9a6442676644fc65f122f2b2af54832d067ae6ccd2f9abe056\": container with ID starting with 89a12fb719283e9a6442676644fc65f122f2b2af54832d067ae6ccd2f9abe056 not found: ID does not exist" containerID="89a12fb719283e9a6442676644fc65f122f2b2af54832d067ae6ccd2f9abe056" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.853105 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a12fb719283e9a6442676644fc65f122f2b2af54832d067ae6ccd2f9abe056"} err="failed to get container status \"89a12fb719283e9a6442676644fc65f122f2b2af54832d067ae6ccd2f9abe056\": rpc error: code = NotFound desc = could not find container \"89a12fb719283e9a6442676644fc65f122f2b2af54832d067ae6ccd2f9abe056\": container with ID starting with 89a12fb719283e9a6442676644fc65f122f2b2af54832d067ae6ccd2f9abe056 not found: ID does not exist" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.853121 4895 scope.go:117] "RemoveContainer" containerID="c516f925cbb38c2a0e36e4a49ede7ada25386608daca55e0968c629a2c4bbc50" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.853409 4895 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c516f925cbb38c2a0e36e4a49ede7ada25386608daca55e0968c629a2c4bbc50\": container with ID starting with c516f925cbb38c2a0e36e4a49ede7ada25386608daca55e0968c629a2c4bbc50 not found: ID does not exist" containerID="c516f925cbb38c2a0e36e4a49ede7ada25386608daca55e0968c629a2c4bbc50" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.853427 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c516f925cbb38c2a0e36e4a49ede7ada25386608daca55e0968c629a2c4bbc50"} err="failed to get container status \"c516f925cbb38c2a0e36e4a49ede7ada25386608daca55e0968c629a2c4bbc50\": rpc error: code = NotFound desc = could not find container \"c516f925cbb38c2a0e36e4a49ede7ada25386608daca55e0968c629a2c4bbc50\": container with ID starting with c516f925cbb38c2a0e36e4a49ede7ada25386608daca55e0968c629a2c4bbc50 not found: ID does not exist" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.853440 4895 scope.go:117] "RemoveContainer" containerID="1ab528a4a5914d257629425f8b1afc78c2957ecb174b95287edf6687fe113861" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.893257 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xgjxz"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.894547 4895 scope.go:117] "RemoveContainer" containerID="a62748f3db06ade103a89e275c37447e19f3e8562aa817d784660a83698afb72" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.896415 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xgjxz"] Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.938167 4895 scope.go:117] "RemoveContainer" containerID="c36abb045c9c540b16ea23489eb2efc73c37b7f07e125ede7b98dd4bf9e7659c" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.954613 4895 scope.go:117] "RemoveContainer" containerID="1ab528a4a5914d257629425f8b1afc78c2957ecb174b95287edf6687fe113861" 
Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.955242 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ab528a4a5914d257629425f8b1afc78c2957ecb174b95287edf6687fe113861\": container with ID starting with 1ab528a4a5914d257629425f8b1afc78c2957ecb174b95287edf6687fe113861 not found: ID does not exist" containerID="1ab528a4a5914d257629425f8b1afc78c2957ecb174b95287edf6687fe113861" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.955309 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ab528a4a5914d257629425f8b1afc78c2957ecb174b95287edf6687fe113861"} err="failed to get container status \"1ab528a4a5914d257629425f8b1afc78c2957ecb174b95287edf6687fe113861\": rpc error: code = NotFound desc = could not find container \"1ab528a4a5914d257629425f8b1afc78c2957ecb174b95287edf6687fe113861\": container with ID starting with 1ab528a4a5914d257629425f8b1afc78c2957ecb174b95287edf6687fe113861 not found: ID does not exist" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.955355 4895 scope.go:117] "RemoveContainer" containerID="a62748f3db06ade103a89e275c37447e19f3e8562aa817d784660a83698afb72" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.956209 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a62748f3db06ade103a89e275c37447e19f3e8562aa817d784660a83698afb72\": container with ID starting with a62748f3db06ade103a89e275c37447e19f3e8562aa817d784660a83698afb72 not found: ID does not exist" containerID="a62748f3db06ade103a89e275c37447e19f3e8562aa817d784660a83698afb72" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.956240 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a62748f3db06ade103a89e275c37447e19f3e8562aa817d784660a83698afb72"} err="failed to get container status 
\"a62748f3db06ade103a89e275c37447e19f3e8562aa817d784660a83698afb72\": rpc error: code = NotFound desc = could not find container \"a62748f3db06ade103a89e275c37447e19f3e8562aa817d784660a83698afb72\": container with ID starting with a62748f3db06ade103a89e275c37447e19f3e8562aa817d784660a83698afb72 not found: ID does not exist" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.956261 4895 scope.go:117] "RemoveContainer" containerID="c36abb045c9c540b16ea23489eb2efc73c37b7f07e125ede7b98dd4bf9e7659c" Jan 29 08:46:47 crc kubenswrapper[4895]: E0129 08:46:47.956574 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c36abb045c9c540b16ea23489eb2efc73c37b7f07e125ede7b98dd4bf9e7659c\": container with ID starting with c36abb045c9c540b16ea23489eb2efc73c37b7f07e125ede7b98dd4bf9e7659c not found: ID does not exist" containerID="c36abb045c9c540b16ea23489eb2efc73c37b7f07e125ede7b98dd4bf9e7659c" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.956604 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c36abb045c9c540b16ea23489eb2efc73c37b7f07e125ede7b98dd4bf9e7659c"} err="failed to get container status \"c36abb045c9c540b16ea23489eb2efc73c37b7f07e125ede7b98dd4bf9e7659c\": rpc error: code = NotFound desc = could not find container \"c36abb045c9c540b16ea23489eb2efc73c37b7f07e125ede7b98dd4bf9e7659c\": container with ID starting with c36abb045c9c540b16ea23489eb2efc73c37b7f07e125ede7b98dd4bf9e7659c not found: ID does not exist" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.956623 4895 scope.go:117] "RemoveContainer" containerID="656df0532adeb13cf6c9afb1413908ef0292d26089195e1975a71e8d8ff4a60d" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.975209 4895 scope.go:117] "RemoveContainer" containerID="30795ed371b20b52d2b3b58bd617c01027e48def6a203658f8f7ebff2bc2b498" Jan 29 08:46:47 crc kubenswrapper[4895]: I0129 08:46:47.994465 4895 
scope.go:117] "RemoveContainer" containerID="dd4eb688794e2608d9b10ebda8769d4c36e8bbc55e1e3e10fb071808351b99fd" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.014926 4895 scope.go:117] "RemoveContainer" containerID="656df0532adeb13cf6c9afb1413908ef0292d26089195e1975a71e8d8ff4a60d" Jan 29 08:46:48 crc kubenswrapper[4895]: E0129 08:46:48.016504 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"656df0532adeb13cf6c9afb1413908ef0292d26089195e1975a71e8d8ff4a60d\": container with ID starting with 656df0532adeb13cf6c9afb1413908ef0292d26089195e1975a71e8d8ff4a60d not found: ID does not exist" containerID="656df0532adeb13cf6c9afb1413908ef0292d26089195e1975a71e8d8ff4a60d" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.016554 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656df0532adeb13cf6c9afb1413908ef0292d26089195e1975a71e8d8ff4a60d"} err="failed to get container status \"656df0532adeb13cf6c9afb1413908ef0292d26089195e1975a71e8d8ff4a60d\": rpc error: code = NotFound desc = could not find container \"656df0532adeb13cf6c9afb1413908ef0292d26089195e1975a71e8d8ff4a60d\": container with ID starting with 656df0532adeb13cf6c9afb1413908ef0292d26089195e1975a71e8d8ff4a60d not found: ID does not exist" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.016597 4895 scope.go:117] "RemoveContainer" containerID="30795ed371b20b52d2b3b58bd617c01027e48def6a203658f8f7ebff2bc2b498" Jan 29 08:46:48 crc kubenswrapper[4895]: E0129 08:46:48.017258 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30795ed371b20b52d2b3b58bd617c01027e48def6a203658f8f7ebff2bc2b498\": container with ID starting with 30795ed371b20b52d2b3b58bd617c01027e48def6a203658f8f7ebff2bc2b498 not found: ID does not exist" containerID="30795ed371b20b52d2b3b58bd617c01027e48def6a203658f8f7ebff2bc2b498" Jan 29 
08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.017288 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30795ed371b20b52d2b3b58bd617c01027e48def6a203658f8f7ebff2bc2b498"} err="failed to get container status \"30795ed371b20b52d2b3b58bd617c01027e48def6a203658f8f7ebff2bc2b498\": rpc error: code = NotFound desc = could not find container \"30795ed371b20b52d2b3b58bd617c01027e48def6a203658f8f7ebff2bc2b498\": container with ID starting with 30795ed371b20b52d2b3b58bd617c01027e48def6a203658f8f7ebff2bc2b498 not found: ID does not exist" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.017310 4895 scope.go:117] "RemoveContainer" containerID="dd4eb688794e2608d9b10ebda8769d4c36e8bbc55e1e3e10fb071808351b99fd" Jan 29 08:46:48 crc kubenswrapper[4895]: E0129 08:46:48.017633 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd4eb688794e2608d9b10ebda8769d4c36e8bbc55e1e3e10fb071808351b99fd\": container with ID starting with dd4eb688794e2608d9b10ebda8769d4c36e8bbc55e1e3e10fb071808351b99fd not found: ID does not exist" containerID="dd4eb688794e2608d9b10ebda8769d4c36e8bbc55e1e3e10fb071808351b99fd" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.017689 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd4eb688794e2608d9b10ebda8769d4c36e8bbc55e1e3e10fb071808351b99fd"} err="failed to get container status \"dd4eb688794e2608d9b10ebda8769d4c36e8bbc55e1e3e10fb071808351b99fd\": rpc error: code = NotFound desc = could not find container \"dd4eb688794e2608d9b10ebda8769d4c36e8bbc55e1e3e10fb071808351b99fd\": container with ID starting with dd4eb688794e2608d9b10ebda8769d4c36e8bbc55e1e3e10fb071808351b99fd not found: ID does not exist" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.017714 4895 scope.go:117] "RemoveContainer" 
containerID="1c9eab74cb14f86f36101dc1b05f62f97b33a04cda6ce5013d756e615e14ee74" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.036487 4895 scope.go:117] "RemoveContainer" containerID="cc9d868e940c05ebacc35e99eace45eb04a1a5f75d4d2b59d9bd84a3c1111722" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.059579 4895 scope.go:117] "RemoveContainer" containerID="1968b6153f474f79cad91112f3690e12adb3e5d39b50037f695a7187996393fb" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.081745 4895 scope.go:117] "RemoveContainer" containerID="1c9eab74cb14f86f36101dc1b05f62f97b33a04cda6ce5013d756e615e14ee74" Jan 29 08:46:48 crc kubenswrapper[4895]: E0129 08:46:48.082404 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c9eab74cb14f86f36101dc1b05f62f97b33a04cda6ce5013d756e615e14ee74\": container with ID starting with 1c9eab74cb14f86f36101dc1b05f62f97b33a04cda6ce5013d756e615e14ee74 not found: ID does not exist" containerID="1c9eab74cb14f86f36101dc1b05f62f97b33a04cda6ce5013d756e615e14ee74" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.082446 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9eab74cb14f86f36101dc1b05f62f97b33a04cda6ce5013d756e615e14ee74"} err="failed to get container status \"1c9eab74cb14f86f36101dc1b05f62f97b33a04cda6ce5013d756e615e14ee74\": rpc error: code = NotFound desc = could not find container \"1c9eab74cb14f86f36101dc1b05f62f97b33a04cda6ce5013d756e615e14ee74\": container with ID starting with 1c9eab74cb14f86f36101dc1b05f62f97b33a04cda6ce5013d756e615e14ee74 not found: ID does not exist" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.082481 4895 scope.go:117] "RemoveContainer" containerID="cc9d868e940c05ebacc35e99eace45eb04a1a5f75d4d2b59d9bd84a3c1111722" Jan 29 08:46:48 crc kubenswrapper[4895]: E0129 08:46:48.082939 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"cc9d868e940c05ebacc35e99eace45eb04a1a5f75d4d2b59d9bd84a3c1111722\": container with ID starting with cc9d868e940c05ebacc35e99eace45eb04a1a5f75d4d2b59d9bd84a3c1111722 not found: ID does not exist" containerID="cc9d868e940c05ebacc35e99eace45eb04a1a5f75d4d2b59d9bd84a3c1111722" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.082963 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc9d868e940c05ebacc35e99eace45eb04a1a5f75d4d2b59d9bd84a3c1111722"} err="failed to get container status \"cc9d868e940c05ebacc35e99eace45eb04a1a5f75d4d2b59d9bd84a3c1111722\": rpc error: code = NotFound desc = could not find container \"cc9d868e940c05ebacc35e99eace45eb04a1a5f75d4d2b59d9bd84a3c1111722\": container with ID starting with cc9d868e940c05ebacc35e99eace45eb04a1a5f75d4d2b59d9bd84a3c1111722 not found: ID does not exist" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.082983 4895 scope.go:117] "RemoveContainer" containerID="1968b6153f474f79cad91112f3690e12adb3e5d39b50037f695a7187996393fb" Jan 29 08:46:48 crc kubenswrapper[4895]: E0129 08:46:48.083298 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1968b6153f474f79cad91112f3690e12adb3e5d39b50037f695a7187996393fb\": container with ID starting with 1968b6153f474f79cad91112f3690e12adb3e5d39b50037f695a7187996393fb not found: ID does not exist" containerID="1968b6153f474f79cad91112f3690e12adb3e5d39b50037f695a7187996393fb" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.083326 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1968b6153f474f79cad91112f3690e12adb3e5d39b50037f695a7187996393fb"} err="failed to get container status \"1968b6153f474f79cad91112f3690e12adb3e5d39b50037f695a7187996393fb\": rpc error: code = NotFound desc = could not find container 
\"1968b6153f474f79cad91112f3690e12adb3e5d39b50037f695a7187996393fb\": container with ID starting with 1968b6153f474f79cad91112f3690e12adb3e5d39b50037f695a7187996393fb not found: ID does not exist" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.083345 4895 scope.go:117] "RemoveContainer" containerID="e3b43f532aebe92c13578a1ffc6b49bc0fde97c31ecbf38278971fb211e925ee" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.098282 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9"] Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.108500 4895 scope.go:117] "RemoveContainer" containerID="788c15e653a5be425c1f42b480c735de35ec163fe9c0b3fee094f0972d351e44" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.126745 4895 scope.go:117] "RemoveContainer" containerID="6ebf6543c611863f14b54c48306b76f55675f19a31fe2a9a7902e422e1d2eeca" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.157361 4895 scope.go:117] "RemoveContainer" containerID="e3b43f532aebe92c13578a1ffc6b49bc0fde97c31ecbf38278971fb211e925ee" Jan 29 08:46:48 crc kubenswrapper[4895]: E0129 08:46:48.160114 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3b43f532aebe92c13578a1ffc6b49bc0fde97c31ecbf38278971fb211e925ee\": container with ID starting with e3b43f532aebe92c13578a1ffc6b49bc0fde97c31ecbf38278971fb211e925ee not found: ID does not exist" containerID="e3b43f532aebe92c13578a1ffc6b49bc0fde97c31ecbf38278971fb211e925ee" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.160177 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3b43f532aebe92c13578a1ffc6b49bc0fde97c31ecbf38278971fb211e925ee"} err="failed to get container status \"e3b43f532aebe92c13578a1ffc6b49bc0fde97c31ecbf38278971fb211e925ee\": rpc error: code = NotFound desc = could not find container 
\"e3b43f532aebe92c13578a1ffc6b49bc0fde97c31ecbf38278971fb211e925ee\": container with ID starting with e3b43f532aebe92c13578a1ffc6b49bc0fde97c31ecbf38278971fb211e925ee not found: ID does not exist" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.160214 4895 scope.go:117] "RemoveContainer" containerID="788c15e653a5be425c1f42b480c735de35ec163fe9c0b3fee094f0972d351e44" Jan 29 08:46:48 crc kubenswrapper[4895]: E0129 08:46:48.160676 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"788c15e653a5be425c1f42b480c735de35ec163fe9c0b3fee094f0972d351e44\": container with ID starting with 788c15e653a5be425c1f42b480c735de35ec163fe9c0b3fee094f0972d351e44 not found: ID does not exist" containerID="788c15e653a5be425c1f42b480c735de35ec163fe9c0b3fee094f0972d351e44" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.160749 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"788c15e653a5be425c1f42b480c735de35ec163fe9c0b3fee094f0972d351e44"} err="failed to get container status \"788c15e653a5be425c1f42b480c735de35ec163fe9c0b3fee094f0972d351e44\": rpc error: code = NotFound desc = could not find container \"788c15e653a5be425c1f42b480c735de35ec163fe9c0b3fee094f0972d351e44\": container with ID starting with 788c15e653a5be425c1f42b480c735de35ec163fe9c0b3fee094f0972d351e44 not found: ID does not exist" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.160799 4895 scope.go:117] "RemoveContainer" containerID="6ebf6543c611863f14b54c48306b76f55675f19a31fe2a9a7902e422e1d2eeca" Jan 29 08:46:48 crc kubenswrapper[4895]: E0129 08:46:48.161557 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ebf6543c611863f14b54c48306b76f55675f19a31fe2a9a7902e422e1d2eeca\": container with ID starting with 6ebf6543c611863f14b54c48306b76f55675f19a31fe2a9a7902e422e1d2eeca not found: ID does not exist" 
containerID="6ebf6543c611863f14b54c48306b76f55675f19a31fe2a9a7902e422e1d2eeca" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.161588 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ebf6543c611863f14b54c48306b76f55675f19a31fe2a9a7902e422e1d2eeca"} err="failed to get container status \"6ebf6543c611863f14b54c48306b76f55675f19a31fe2a9a7902e422e1d2eeca\": rpc error: code = NotFound desc = could not find container \"6ebf6543c611863f14b54c48306b76f55675f19a31fe2a9a7902e422e1d2eeca\": container with ID starting with 6ebf6543c611863f14b54c48306b76f55675f19a31fe2a9a7902e422e1d2eeca not found: ID does not exist" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.161609 4895 scope.go:117] "RemoveContainer" containerID="ccc2f3063f3215250c490b43ab7f2ee8d0e8a996fcc377d7ffd5e55e7d09dc26" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.185653 4895 scope.go:117] "RemoveContainer" containerID="ef42c9becf063353673a42eaaeee135b5b14ea6726043a5c03d77c11e7f2cacf" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.215327 4895 scope.go:117] "RemoveContainer" containerID="6b7a6ac32292da121904182281cbd50dbb76cfc24b8684500df8b7f74ff4e82b" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.249510 4895 scope.go:117] "RemoveContainer" containerID="ccc2f3063f3215250c490b43ab7f2ee8d0e8a996fcc377d7ffd5e55e7d09dc26" Jan 29 08:46:48 crc kubenswrapper[4895]: E0129 08:46:48.250104 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccc2f3063f3215250c490b43ab7f2ee8d0e8a996fcc377d7ffd5e55e7d09dc26\": container with ID starting with ccc2f3063f3215250c490b43ab7f2ee8d0e8a996fcc377d7ffd5e55e7d09dc26 not found: ID does not exist" containerID="ccc2f3063f3215250c490b43ab7f2ee8d0e8a996fcc377d7ffd5e55e7d09dc26" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.250135 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ccc2f3063f3215250c490b43ab7f2ee8d0e8a996fcc377d7ffd5e55e7d09dc26"} err="failed to get container status \"ccc2f3063f3215250c490b43ab7f2ee8d0e8a996fcc377d7ffd5e55e7d09dc26\": rpc error: code = NotFound desc = could not find container \"ccc2f3063f3215250c490b43ab7f2ee8d0e8a996fcc377d7ffd5e55e7d09dc26\": container with ID starting with ccc2f3063f3215250c490b43ab7f2ee8d0e8a996fcc377d7ffd5e55e7d09dc26 not found: ID does not exist" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.250160 4895 scope.go:117] "RemoveContainer" containerID="ef42c9becf063353673a42eaaeee135b5b14ea6726043a5c03d77c11e7f2cacf" Jan 29 08:46:48 crc kubenswrapper[4895]: E0129 08:46:48.250527 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef42c9becf063353673a42eaaeee135b5b14ea6726043a5c03d77c11e7f2cacf\": container with ID starting with ef42c9becf063353673a42eaaeee135b5b14ea6726043a5c03d77c11e7f2cacf not found: ID does not exist" containerID="ef42c9becf063353673a42eaaeee135b5b14ea6726043a5c03d77c11e7f2cacf" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.250557 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef42c9becf063353673a42eaaeee135b5b14ea6726043a5c03d77c11e7f2cacf"} err="failed to get container status \"ef42c9becf063353673a42eaaeee135b5b14ea6726043a5c03d77c11e7f2cacf\": rpc error: code = NotFound desc = could not find container \"ef42c9becf063353673a42eaaeee135b5b14ea6726043a5c03d77c11e7f2cacf\": container with ID starting with ef42c9becf063353673a42eaaeee135b5b14ea6726043a5c03d77c11e7f2cacf not found: ID does not exist" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.250575 4895 scope.go:117] "RemoveContainer" containerID="6b7a6ac32292da121904182281cbd50dbb76cfc24b8684500df8b7f74ff4e82b" Jan 29 08:46:48 crc kubenswrapper[4895]: E0129 08:46:48.250807 4895 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6b7a6ac32292da121904182281cbd50dbb76cfc24b8684500df8b7f74ff4e82b\": container with ID starting with 6b7a6ac32292da121904182281cbd50dbb76cfc24b8684500df8b7f74ff4e82b not found: ID does not exist" containerID="6b7a6ac32292da121904182281cbd50dbb76cfc24b8684500df8b7f74ff4e82b" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.250826 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b7a6ac32292da121904182281cbd50dbb76cfc24b8684500df8b7f74ff4e82b"} err="failed to get container status \"6b7a6ac32292da121904182281cbd50dbb76cfc24b8684500df8b7f74ff4e82b\": rpc error: code = NotFound desc = could not find container \"6b7a6ac32292da121904182281cbd50dbb76cfc24b8684500df8b7f74ff4e82b\": container with ID starting with 6b7a6ac32292da121904182281cbd50dbb76cfc24b8684500df8b7f74ff4e82b not found: ID does not exist" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.250842 4895 scope.go:117] "RemoveContainer" containerID="91071f5ddb1fd06d97c8cad7580fb55c9c43db65a454a64101eb8fda2e81fdf8" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.270355 4895 scope.go:117] "RemoveContainer" containerID="3d32b1abbfba61aab5e4afb6d7b933ae7af1b19d8344190dde7ed4808d2bdf80" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.286533 4895 scope.go:117] "RemoveContainer" containerID="91071f5ddb1fd06d97c8cad7580fb55c9c43db65a454a64101eb8fda2e81fdf8" Jan 29 08:46:48 crc kubenswrapper[4895]: E0129 08:46:48.287135 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91071f5ddb1fd06d97c8cad7580fb55c9c43db65a454a64101eb8fda2e81fdf8\": container with ID starting with 91071f5ddb1fd06d97c8cad7580fb55c9c43db65a454a64101eb8fda2e81fdf8 not found: ID does not exist" containerID="91071f5ddb1fd06d97c8cad7580fb55c9c43db65a454a64101eb8fda2e81fdf8" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 
08:46:48.287216 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91071f5ddb1fd06d97c8cad7580fb55c9c43db65a454a64101eb8fda2e81fdf8"} err="failed to get container status \"91071f5ddb1fd06d97c8cad7580fb55c9c43db65a454a64101eb8fda2e81fdf8\": rpc error: code = NotFound desc = could not find container \"91071f5ddb1fd06d97c8cad7580fb55c9c43db65a454a64101eb8fda2e81fdf8\": container with ID starting with 91071f5ddb1fd06d97c8cad7580fb55c9c43db65a454a64101eb8fda2e81fdf8 not found: ID does not exist" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.287265 4895 scope.go:117] "RemoveContainer" containerID="3d32b1abbfba61aab5e4afb6d7b933ae7af1b19d8344190dde7ed4808d2bdf80" Jan 29 08:46:48 crc kubenswrapper[4895]: E0129 08:46:48.287748 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d32b1abbfba61aab5e4afb6d7b933ae7af1b19d8344190dde7ed4808d2bdf80\": container with ID starting with 3d32b1abbfba61aab5e4afb6d7b933ae7af1b19d8344190dde7ed4808d2bdf80 not found: ID does not exist" containerID="3d32b1abbfba61aab5e4afb6d7b933ae7af1b19d8344190dde7ed4808d2bdf80" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.287792 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d32b1abbfba61aab5e4afb6d7b933ae7af1b19d8344190dde7ed4808d2bdf80"} err="failed to get container status \"3d32b1abbfba61aab5e4afb6d7b933ae7af1b19d8344190dde7ed4808d2bdf80\": rpc error: code = NotFound desc = could not find container \"3d32b1abbfba61aab5e4afb6d7b933ae7af1b19d8344190dde7ed4808d2bdf80\": container with ID starting with 3d32b1abbfba61aab5e4afb6d7b933ae7af1b19d8344190dde7ed4808d2bdf80 not found: ID does not exist" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.502851 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cdbdn" 
event={"ID":"763fcf96-02dd-48dd-a5b0-40714be2a672","Type":"ContainerStarted","Data":"2bd6357c04f900afd1cd82b3b215403be73b67cf436d72ec9430979ba3b0faab"} Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.503275 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cdbdn" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.503292 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cdbdn" event={"ID":"763fcf96-02dd-48dd-a5b0-40714be2a672","Type":"ContainerStarted","Data":"a92c0f3fbca2f9b7bb15d21cf6b0fcec2b190e77553f815ba76d2ffa234b47f4"} Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.506853 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" event={"ID":"0f4786eb-1530-40f3-ac26-668545f5e88f","Type":"ContainerStarted","Data":"bf940457162f99a4fa5270df77010110b7c8b22b200ff5cb4abafdc48fcfc791"} Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.506969 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.506991 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" event={"ID":"0f4786eb-1530-40f3-ac26-668545f5e88f","Type":"ContainerStarted","Data":"473d051610cbf323d50a08773659320f6164e2ece8a05f7d9cd12bfbbe91014e"} Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.512198 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" event={"ID":"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86","Type":"ContainerStarted","Data":"3509206bdec6b9448e057a5edb43c79acc582a631d1606caf0f6e296a7f6322c"} Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 
08:46:48.512315 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" event={"ID":"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86","Type":"ContainerStarted","Data":"83c984ca660aa90c728dc35687bec2ea71a2a8b4422440b3e5c7d5be12b1003f"} Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.512340 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.518392 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.520436 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.522968 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cdbdn" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.550962 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cdbdn" podStartSLOduration=2.550939611 podStartE2EDuration="2.550939611s" podCreationTimestamp="2026-01-29 08:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:46:48.544787251 +0000 UTC m=+350.186295397" watchObservedRunningTime="2026-01-29 08:46:48.550939611 +0000 UTC m=+350.192447747" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.672559 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" podStartSLOduration=3.6725344079999998 
podStartE2EDuration="3.672534408s" podCreationTimestamp="2026-01-29 08:46:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:46:48.671668614 +0000 UTC m=+350.313176760" watchObservedRunningTime="2026-01-29 08:46:48.672534408 +0000 UTC m=+350.314042554" Jan 29 08:46:48 crc kubenswrapper[4895]: I0129 08:46:48.674379 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" podStartSLOduration=3.674372799 podStartE2EDuration="3.674372799s" podCreationTimestamp="2026-01-29 08:46:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:46:48.621790443 +0000 UTC m=+350.263298619" watchObservedRunningTime="2026-01-29 08:46:48.674372799 +0000 UTC m=+350.315880945" Jan 29 08:46:49 crc kubenswrapper[4895]: I0129 08:46:49.220366 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0349d46c-bf39-4ba0-99be-22445866386b" path="/var/lib/kubelet/pods/0349d46c-bf39-4ba0-99be-22445866386b/volumes" Jan 29 08:46:49 crc kubenswrapper[4895]: I0129 08:46:49.221189 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb3534d-8971-42ec-8aaf-a970b786e631" path="/var/lib/kubelet/pods/3eb3534d-8971-42ec-8aaf-a970b786e631/volumes" Jan 29 08:46:49 crc kubenswrapper[4895]: I0129 08:46:49.221837 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="886dfa02-5b87-4bdf-9bf5-fcd914ff2afb" path="/var/lib/kubelet/pods/886dfa02-5b87-4bdf-9bf5-fcd914ff2afb/volumes" Jan 29 08:46:49 crc kubenswrapper[4895]: I0129 08:46:49.223013 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f6088a3-2691-4029-a576-2a5abcd3b107" path="/var/lib/kubelet/pods/8f6088a3-2691-4029-a576-2a5abcd3b107/volumes" Jan 29 08:46:49 crc kubenswrapper[4895]: I0129 
08:46:49.223475 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0b31a89-9993-4996-8b19-961efcb757ed" path="/var/lib/kubelet/pods/a0b31a89-9993-4996-8b19-961efcb757ed/volumes" Jan 29 08:46:49 crc kubenswrapper[4895]: I0129 08:46:49.225175 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" path="/var/lib/kubelet/pods/a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124/volumes" Jan 29 08:46:49 crc kubenswrapper[4895]: I0129 08:46:49.225733 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e96926-7c32-4a64-b37d-342a66d925ea" path="/var/lib/kubelet/pods/b8e96926-7c32-4a64-b37d-342a66d925ea/volumes" Jan 29 08:46:49 crc kubenswrapper[4895]: I0129 08:46:49.226349 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8" path="/var/lib/kubelet/pods/bc135b7c-ddf0-4730-93d2-6b0ea96ff5b8/volumes" Jan 29 08:46:49 crc kubenswrapper[4895]: I0129 08:46:49.227350 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c14af255-6e29-4bea-978b-8b5bf6285bd8" path="/var/lib/kubelet/pods/c14af255-6e29-4bea-978b-8b5bf6285bd8/volumes" Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.186079 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9"] Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.187171 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" podUID="43b3e4fb-51ca-4025-82ee-4bb8afcb4e86" containerName="controller-manager" containerID="cri-o://3509206bdec6b9448e057a5edb43c79acc582a631d1606caf0f6e296a7f6322c" gracePeriod=30 Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.206885 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m"] Jan 29 
08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.207994 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" podUID="0f4786eb-1530-40f3-ac26-668545f5e88f" containerName="route-controller-manager" containerID="cri-o://bf940457162f99a4fa5270df77010110b7c8b22b200ff5cb4abafdc48fcfc791" gracePeriod=30 Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.632293 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.689460 4895 generic.go:334] "Generic (PLEG): container finished" podID="0f4786eb-1530-40f3-ac26-668545f5e88f" containerID="bf940457162f99a4fa5270df77010110b7c8b22b200ff5cb4abafdc48fcfc791" exitCode=0 Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.689640 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.690193 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" event={"ID":"0f4786eb-1530-40f3-ac26-668545f5e88f","Type":"ContainerDied","Data":"bf940457162f99a4fa5270df77010110b7c8b22b200ff5cb4abafdc48fcfc791"} Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.690279 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m" event={"ID":"0f4786eb-1530-40f3-ac26-668545f5e88f","Type":"ContainerDied","Data":"473d051610cbf323d50a08773659320f6164e2ece8a05f7d9cd12bfbbe91014e"} Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.690305 4895 scope.go:117] "RemoveContainer" containerID="bf940457162f99a4fa5270df77010110b7c8b22b200ff5cb4abafdc48fcfc791" Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.695829 4895 generic.go:334] "Generic (PLEG): container finished" podID="43b3e4fb-51ca-4025-82ee-4bb8afcb4e86" containerID="3509206bdec6b9448e057a5edb43c79acc582a631d1606caf0f6e296a7f6322c" exitCode=0 Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.695896 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" event={"ID":"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86","Type":"ContainerDied","Data":"3509206bdec6b9448e057a5edb43c79acc582a631d1606caf0f6e296a7f6322c"} Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.708507 4895 scope.go:117] "RemoveContainer" containerID="bf940457162f99a4fa5270df77010110b7c8b22b200ff5cb4abafdc48fcfc791" Jan 29 08:47:14 crc kubenswrapper[4895]: E0129 08:47:14.709358 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bf940457162f99a4fa5270df77010110b7c8b22b200ff5cb4abafdc48fcfc791\": container with ID starting with bf940457162f99a4fa5270df77010110b7c8b22b200ff5cb4abafdc48fcfc791 not found: ID does not exist" containerID="bf940457162f99a4fa5270df77010110b7c8b22b200ff5cb4abafdc48fcfc791" Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.709405 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf940457162f99a4fa5270df77010110b7c8b22b200ff5cb4abafdc48fcfc791"} err="failed to get container status \"bf940457162f99a4fa5270df77010110b7c8b22b200ff5cb4abafdc48fcfc791\": rpc error: code = NotFound desc = could not find container \"bf940457162f99a4fa5270df77010110b7c8b22b200ff5cb4abafdc48fcfc791\": container with ID starting with bf940457162f99a4fa5270df77010110b7c8b22b200ff5cb4abafdc48fcfc791 not found: ID does not exist" Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.798402 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f4786eb-1530-40f3-ac26-668545f5e88f-client-ca\") pod \"0f4786eb-1530-40f3-ac26-668545f5e88f\" (UID: \"0f4786eb-1530-40f3-ac26-668545f5e88f\") " Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.798510 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4786eb-1530-40f3-ac26-668545f5e88f-serving-cert\") pod \"0f4786eb-1530-40f3-ac26-668545f5e88f\" (UID: \"0f4786eb-1530-40f3-ac26-668545f5e88f\") " Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.798546 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgltg\" (UniqueName: \"kubernetes.io/projected/0f4786eb-1530-40f3-ac26-668545f5e88f-kube-api-access-kgltg\") pod \"0f4786eb-1530-40f3-ac26-668545f5e88f\" (UID: \"0f4786eb-1530-40f3-ac26-668545f5e88f\") " Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.798577 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4786eb-1530-40f3-ac26-668545f5e88f-config\") pod \"0f4786eb-1530-40f3-ac26-668545f5e88f\" (UID: \"0f4786eb-1530-40f3-ac26-668545f5e88f\") " Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.799689 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f4786eb-1530-40f3-ac26-668545f5e88f-client-ca" (OuterVolumeSpecName: "client-ca") pod "0f4786eb-1530-40f3-ac26-668545f5e88f" (UID: "0f4786eb-1530-40f3-ac26-668545f5e88f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.799875 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f4786eb-1530-40f3-ac26-668545f5e88f-config" (OuterVolumeSpecName: "config") pod "0f4786eb-1530-40f3-ac26-668545f5e88f" (UID: "0f4786eb-1530-40f3-ac26-668545f5e88f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.808269 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f4786eb-1530-40f3-ac26-668545f5e88f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0f4786eb-1530-40f3-ac26-668545f5e88f" (UID: "0f4786eb-1530-40f3-ac26-668545f5e88f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.808382 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f4786eb-1530-40f3-ac26-668545f5e88f-kube-api-access-kgltg" (OuterVolumeSpecName: "kube-api-access-kgltg") pod "0f4786eb-1530-40f3-ac26-668545f5e88f" (UID: "0f4786eb-1530-40f3-ac26-668545f5e88f"). InnerVolumeSpecName "kube-api-access-kgltg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.855232 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.900177 4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f4786eb-1530-40f3-ac26-668545f5e88f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.900228 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f4786eb-1530-40f3-ac26-668545f5e88f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.900237 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgltg\" (UniqueName: \"kubernetes.io/projected/0f4786eb-1530-40f3-ac26-668545f5e88f-kube-api-access-kgltg\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:14 crc kubenswrapper[4895]: I0129 08:47:14.900246 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f4786eb-1530-40f3-ac26-668545f5e88f-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.001021 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-proxy-ca-bundles\") pod \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.001114 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck78q\" (UniqueName: \"kubernetes.io/projected/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-kube-api-access-ck78q\") pod \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\" 
(UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.001148 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-client-ca\") pod \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.001222 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-serving-cert\") pod \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.001984 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "43b3e4fb-51ca-4025-82ee-4bb8afcb4e86" (UID: "43b3e4fb-51ca-4025-82ee-4bb8afcb4e86"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.002014 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-config\") pod \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\" (UID: \"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86\") " Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.002068 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-client-ca" (OuterVolumeSpecName: "client-ca") pod "43b3e4fb-51ca-4025-82ee-4bb8afcb4e86" (UID: "43b3e4fb-51ca-4025-82ee-4bb8afcb4e86"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.002273 4895 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.002316 4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.002778 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-config" (OuterVolumeSpecName: "config") pod "43b3e4fb-51ca-4025-82ee-4bb8afcb4e86" (UID: "43b3e4fb-51ca-4025-82ee-4bb8afcb4e86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.004683 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-kube-api-access-ck78q" (OuterVolumeSpecName: "kube-api-access-ck78q") pod "43b3e4fb-51ca-4025-82ee-4bb8afcb4e86" (UID: "43b3e4fb-51ca-4025-82ee-4bb8afcb4e86"). InnerVolumeSpecName "kube-api-access-ck78q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.004938 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "43b3e4fb-51ca-4025-82ee-4bb8afcb4e86" (UID: "43b3e4fb-51ca-4025-82ee-4bb8afcb4e86"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.026048 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m"] Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.028403 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-fgj6m"] Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.103862 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.104288 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.104367 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck78q\" (UniqueName: \"kubernetes.io/projected/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86-kube-api-access-ck78q\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.218870 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f4786eb-1530-40f3-ac26-668545f5e88f" path="/var/lib/kubelet/pods/0f4786eb-1530-40f3-ac26-668545f5e88f/volumes" Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.703948 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" event={"ID":"43b3e4fb-51ca-4025-82ee-4bb8afcb4e86","Type":"ContainerDied","Data":"83c984ca660aa90c728dc35687bec2ea71a2a8b4422440b3e5c7d5be12b1003f"} Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.704009 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9" Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.704042 4895 scope.go:117] "RemoveContainer" containerID="3509206bdec6b9448e057a5edb43c79acc582a631d1606caf0f6e296a7f6322c" Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.728684 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9"] Jan 29 08:47:15 crc kubenswrapper[4895]: I0129 08:47:15.733327 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6ffd5d66b-7wxd9"] Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.020473 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.020850 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.142997 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f8d689d8f-84xv7"] Jan 29 08:47:16 crc kubenswrapper[4895]: E0129 08:47:16.143681 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" containerName="extract-content" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.143700 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" containerName="extract-content" Jan 29 08:47:16 
crc kubenswrapper[4895]: E0129 08:47:16.143726 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" containerName="registry-server" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.143735 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" containerName="registry-server" Jan 29 08:47:16 crc kubenswrapper[4895]: E0129 08:47:16.143748 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b3e4fb-51ca-4025-82ee-4bb8afcb4e86" containerName="controller-manager" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.143755 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b3e4fb-51ca-4025-82ee-4bb8afcb4e86" containerName="controller-manager" Jan 29 08:47:16 crc kubenswrapper[4895]: E0129 08:47:16.143771 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4786eb-1530-40f3-ac26-668545f5e88f" containerName="route-controller-manager" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.143777 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f4786eb-1530-40f3-ac26-668545f5e88f" containerName="route-controller-manager" Jan 29 08:47:16 crc kubenswrapper[4895]: E0129 08:47:16.143786 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" containerName="extract-utilities" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.143793 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" containerName="extract-utilities" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.143949 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67fb894-ca3a-4dd3-ac2f-eb91e8ea3124" containerName="registry-server" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.143966 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f4786eb-1530-40f3-ac26-668545f5e88f" 
containerName="route-controller-manager" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.143974 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43b3e4fb-51ca-4025-82ee-4bb8afcb4e86" containerName="controller-manager" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.144475 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.146112 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd"] Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.146808 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.146969 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.147284 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.151728 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f8d689d8f-84xv7"] Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.153469 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.153419 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.154018 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 
08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.154235 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.154962 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.154694 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.155077 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.155643 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.161131 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.162320 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.194206 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd"] Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.196820 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.319622 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/189d9428-8709-4439-88ac-f9e2f133090d-serving-cert\") pod 
\"controller-manager-6f8d689d8f-84xv7\" (UID: \"189d9428-8709-4439-88ac-f9e2f133090d\") " pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.319701 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d60e02-7008-4429-8f35-9dea18a780ed-config\") pod \"route-controller-manager-954f9775-m4bjd\" (UID: \"c1d60e02-7008-4429-8f35-9dea18a780ed\") " pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.319729 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1d60e02-7008-4429-8f35-9dea18a780ed-client-ca\") pod \"route-controller-manager-954f9775-m4bjd\" (UID: \"c1d60e02-7008-4429-8f35-9dea18a780ed\") " pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.319757 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52bqn\" (UniqueName: \"kubernetes.io/projected/c1d60e02-7008-4429-8f35-9dea18a780ed-kube-api-access-52bqn\") pod \"route-controller-manager-954f9775-m4bjd\" (UID: \"c1d60e02-7008-4429-8f35-9dea18a780ed\") " pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.319783 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l6hj\" (UniqueName: \"kubernetes.io/projected/189d9428-8709-4439-88ac-f9e2f133090d-kube-api-access-5l6hj\") pod \"controller-manager-6f8d689d8f-84xv7\" (UID: \"189d9428-8709-4439-88ac-f9e2f133090d\") " pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc 
kubenswrapper[4895]: I0129 08:47:16.319811 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1d60e02-7008-4429-8f35-9dea18a780ed-serving-cert\") pod \"route-controller-manager-954f9775-m4bjd\" (UID: \"c1d60e02-7008-4429-8f35-9dea18a780ed\") " pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.320095 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/189d9428-8709-4439-88ac-f9e2f133090d-proxy-ca-bundles\") pod \"controller-manager-6f8d689d8f-84xv7\" (UID: \"189d9428-8709-4439-88ac-f9e2f133090d\") " pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.320119 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/189d9428-8709-4439-88ac-f9e2f133090d-client-ca\") pod \"controller-manager-6f8d689d8f-84xv7\" (UID: \"189d9428-8709-4439-88ac-f9e2f133090d\") " pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.320138 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189d9428-8709-4439-88ac-f9e2f133090d-config\") pod \"controller-manager-6f8d689d8f-84xv7\" (UID: \"189d9428-8709-4439-88ac-f9e2f133090d\") " pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.421760 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d60e02-7008-4429-8f35-9dea18a780ed-config\") pod 
\"route-controller-manager-954f9775-m4bjd\" (UID: \"c1d60e02-7008-4429-8f35-9dea18a780ed\") " pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.421840 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1d60e02-7008-4429-8f35-9dea18a780ed-client-ca\") pod \"route-controller-manager-954f9775-m4bjd\" (UID: \"c1d60e02-7008-4429-8f35-9dea18a780ed\") " pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.421884 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52bqn\" (UniqueName: \"kubernetes.io/projected/c1d60e02-7008-4429-8f35-9dea18a780ed-kube-api-access-52bqn\") pod \"route-controller-manager-954f9775-m4bjd\" (UID: \"c1d60e02-7008-4429-8f35-9dea18a780ed\") " pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.421949 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l6hj\" (UniqueName: \"kubernetes.io/projected/189d9428-8709-4439-88ac-f9e2f133090d-kube-api-access-5l6hj\") pod \"controller-manager-6f8d689d8f-84xv7\" (UID: \"189d9428-8709-4439-88ac-f9e2f133090d\") " pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.421979 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1d60e02-7008-4429-8f35-9dea18a780ed-serving-cert\") pod \"route-controller-manager-954f9775-m4bjd\" (UID: \"c1d60e02-7008-4429-8f35-9dea18a780ed\") " pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.422017 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/189d9428-8709-4439-88ac-f9e2f133090d-proxy-ca-bundles\") pod \"controller-manager-6f8d689d8f-84xv7\" (UID: \"189d9428-8709-4439-88ac-f9e2f133090d\") " pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.422040 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/189d9428-8709-4439-88ac-f9e2f133090d-client-ca\") pod \"controller-manager-6f8d689d8f-84xv7\" (UID: \"189d9428-8709-4439-88ac-f9e2f133090d\") " pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.422061 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189d9428-8709-4439-88ac-f9e2f133090d-config\") pod \"controller-manager-6f8d689d8f-84xv7\" (UID: \"189d9428-8709-4439-88ac-f9e2f133090d\") " pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.422087 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/189d9428-8709-4439-88ac-f9e2f133090d-serving-cert\") pod \"controller-manager-6f8d689d8f-84xv7\" (UID: \"189d9428-8709-4439-88ac-f9e2f133090d\") " pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.423982 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d60e02-7008-4429-8f35-9dea18a780ed-config\") pod \"route-controller-manager-954f9775-m4bjd\" (UID: \"c1d60e02-7008-4429-8f35-9dea18a780ed\") " 
pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.424076 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/189d9428-8709-4439-88ac-f9e2f133090d-proxy-ca-bundles\") pod \"controller-manager-6f8d689d8f-84xv7\" (UID: \"189d9428-8709-4439-88ac-f9e2f133090d\") " pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.424333 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/189d9428-8709-4439-88ac-f9e2f133090d-client-ca\") pod \"controller-manager-6f8d689d8f-84xv7\" (UID: \"189d9428-8709-4439-88ac-f9e2f133090d\") " pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.424592 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1d60e02-7008-4429-8f35-9dea18a780ed-client-ca\") pod \"route-controller-manager-954f9775-m4bjd\" (UID: \"c1d60e02-7008-4429-8f35-9dea18a780ed\") " pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.425503 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189d9428-8709-4439-88ac-f9e2f133090d-config\") pod \"controller-manager-6f8d689d8f-84xv7\" (UID: \"189d9428-8709-4439-88ac-f9e2f133090d\") " pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.432555 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/189d9428-8709-4439-88ac-f9e2f133090d-serving-cert\") pod 
\"controller-manager-6f8d689d8f-84xv7\" (UID: \"189d9428-8709-4439-88ac-f9e2f133090d\") " pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.441595 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1d60e02-7008-4429-8f35-9dea18a780ed-serving-cert\") pod \"route-controller-manager-954f9775-m4bjd\" (UID: \"c1d60e02-7008-4429-8f35-9dea18a780ed\") " pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.445868 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52bqn\" (UniqueName: \"kubernetes.io/projected/c1d60e02-7008-4429-8f35-9dea18a780ed-kube-api-access-52bqn\") pod \"route-controller-manager-954f9775-m4bjd\" (UID: \"c1d60e02-7008-4429-8f35-9dea18a780ed\") " pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.448727 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l6hj\" (UniqueName: \"kubernetes.io/projected/189d9428-8709-4439-88ac-f9e2f133090d-kube-api-access-5l6hj\") pod \"controller-manager-6f8d689d8f-84xv7\" (UID: \"189d9428-8709-4439-88ac-f9e2f133090d\") " pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.472650 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.525462 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" Jan 29 08:47:16 crc kubenswrapper[4895]: I0129 08:47:16.960252 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f8d689d8f-84xv7"] Jan 29 08:47:17 crc kubenswrapper[4895]: I0129 08:47:17.005337 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd"] Jan 29 08:47:17 crc kubenswrapper[4895]: W0129 08:47:17.011610 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1d60e02_7008_4429_8f35_9dea18a780ed.slice/crio-1773ee3ab04e920d2006a2a08e71ab64ec0d67ebf7272e38ec1131b2d7bfd671 WatchSource:0}: Error finding container 1773ee3ab04e920d2006a2a08e71ab64ec0d67ebf7272e38ec1131b2d7bfd671: Status 404 returned error can't find the container with id 1773ee3ab04e920d2006a2a08e71ab64ec0d67ebf7272e38ec1131b2d7bfd671 Jan 29 08:47:17 crc kubenswrapper[4895]: I0129 08:47:17.222137 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43b3e4fb-51ca-4025-82ee-4bb8afcb4e86" path="/var/lib/kubelet/pods/43b3e4fb-51ca-4025-82ee-4bb8afcb4e86/volumes" Jan 29 08:47:17 crc kubenswrapper[4895]: I0129 08:47:17.719255 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" event={"ID":"c1d60e02-7008-4429-8f35-9dea18a780ed","Type":"ContainerStarted","Data":"2f0c504dd901de3cde048a7243d6afbdd0a6a38e59c45a8dcf81fcf4d114d94a"} Jan 29 08:47:17 crc kubenswrapper[4895]: I0129 08:47:17.719492 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" event={"ID":"c1d60e02-7008-4429-8f35-9dea18a780ed","Type":"ContainerStarted","Data":"1773ee3ab04e920d2006a2a08e71ab64ec0d67ebf7272e38ec1131b2d7bfd671"} Jan 29 
08:47:17 crc kubenswrapper[4895]: I0129 08:47:17.720731 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" event={"ID":"189d9428-8709-4439-88ac-f9e2f133090d","Type":"ContainerStarted","Data":"c16a8b9fd9d3f07c617dd59b7cf3cf2d39e7392d48c0470f0b006b848d151ae3"} Jan 29 08:47:17 crc kubenswrapper[4895]: I0129 08:47:17.720792 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" event={"ID":"189d9428-8709-4439-88ac-f9e2f133090d","Type":"ContainerStarted","Data":"d93818bbdf5df6ecc8840388dad422308e4c1de1d66bc08731ac43fd589af603"} Jan 29 08:47:18 crc kubenswrapper[4895]: I0129 08:47:18.726996 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" Jan 29 08:47:18 crc kubenswrapper[4895]: I0129 08:47:18.728035 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:18 crc kubenswrapper[4895]: I0129 08:47:18.733438 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" Jan 29 08:47:18 crc kubenswrapper[4895]: I0129 08:47:18.734977 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" Jan 29 08:47:18 crc kubenswrapper[4895]: I0129 08:47:18.749298 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-954f9775-m4bjd" podStartSLOduration=4.749278356 podStartE2EDuration="4.749278356s" podCreationTimestamp="2026-01-29 08:47:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 
08:47:18.746709655 +0000 UTC m=+380.388217801" watchObservedRunningTime="2026-01-29 08:47:18.749278356 +0000 UTC m=+380.390786512" Jan 29 08:47:18 crc kubenswrapper[4895]: I0129 08:47:18.769424 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f8d689d8f-84xv7" podStartSLOduration=4.769404974 podStartE2EDuration="4.769404974s" podCreationTimestamp="2026-01-29 08:47:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:47:18.768627502 +0000 UTC m=+380.410135648" watchObservedRunningTime="2026-01-29 08:47:18.769404974 +0000 UTC m=+380.410913120" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.272952 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rfq6k"] Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.273805 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.296081 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rfq6k"] Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.382153 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac25da20-5dc5-46ab-954c-522a8a1e608f-trusted-ca\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.382254 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac25da20-5dc5-46ab-954c-522a8a1e608f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.382340 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac25da20-5dc5-46ab-954c-522a8a1e608f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.382404 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac25da20-5dc5-46ab-954c-522a8a1e608f-registry-tls\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.383067 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac25da20-5dc5-46ab-954c-522a8a1e608f-bound-sa-token\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.383200 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q69v6\" (UniqueName: \"kubernetes.io/projected/ac25da20-5dc5-46ab-954c-522a8a1e608f-kube-api-access-q69v6\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.383361 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.383586 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac25da20-5dc5-46ab-954c-522a8a1e608f-registry-certificates\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.408290 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.485678 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac25da20-5dc5-46ab-954c-522a8a1e608f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.485747 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac25da20-5dc5-46ab-954c-522a8a1e608f-bound-sa-token\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.485764 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q69v6\" (UniqueName: \"kubernetes.io/projected/ac25da20-5dc5-46ab-954c-522a8a1e608f-kube-api-access-q69v6\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.485786 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac25da20-5dc5-46ab-954c-522a8a1e608f-registry-tls\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc 
kubenswrapper[4895]: I0129 08:47:20.485814 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac25da20-5dc5-46ab-954c-522a8a1e608f-registry-certificates\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.485857 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac25da20-5dc5-46ab-954c-522a8a1e608f-trusted-ca\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.485880 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac25da20-5dc5-46ab-954c-522a8a1e608f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.486449 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac25da20-5dc5-46ab-954c-522a8a1e608f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.487618 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac25da20-5dc5-46ab-954c-522a8a1e608f-trusted-ca\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.488023 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac25da20-5dc5-46ab-954c-522a8a1e608f-registry-certificates\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.495853 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac25da20-5dc5-46ab-954c-522a8a1e608f-registry-tls\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.497698 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac25da20-5dc5-46ab-954c-522a8a1e608f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.505803 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q69v6\" (UniqueName: \"kubernetes.io/projected/ac25da20-5dc5-46ab-954c-522a8a1e608f-kube-api-access-q69v6\") pod \"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.506748 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac25da20-5dc5-46ab-954c-522a8a1e608f-bound-sa-token\") pod 
\"image-registry-66df7c8f76-rfq6k\" (UID: \"ac25da20-5dc5-46ab-954c-522a8a1e608f\") " pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:20 crc kubenswrapper[4895]: I0129 08:47:20.594809 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:21 crc kubenswrapper[4895]: I0129 08:47:21.044231 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rfq6k"] Jan 29 08:47:21 crc kubenswrapper[4895]: I0129 08:47:21.746628 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" event={"ID":"ac25da20-5dc5-46ab-954c-522a8a1e608f","Type":"ContainerStarted","Data":"67c4b9c50607d12b6779d8b4d49050477060566d8fb5aeaa932924226b562281"} Jan 29 08:47:21 crc kubenswrapper[4895]: I0129 08:47:21.746687 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" event={"ID":"ac25da20-5dc5-46ab-954c-522a8a1e608f","Type":"ContainerStarted","Data":"bd5be4a6c7cb3d89af5bbd1d797c789fa0b6ddee18d4d58ba91d0824f4e5e334"} Jan 29 08:47:21 crc kubenswrapper[4895]: I0129 08:47:21.746849 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:21 crc kubenswrapper[4895]: I0129 08:47:21.770567 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" podStartSLOduration=1.770540073 podStartE2EDuration="1.770540073s" podCreationTimestamp="2026-01-29 08:47:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:47:21.76752463 +0000 UTC m=+383.409032776" watchObservedRunningTime="2026-01-29 08:47:21.770540073 +0000 UTC m=+383.412048239" Jan 29 08:47:35 crc 
kubenswrapper[4895]: I0129 08:47:35.823770 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4jxzr"] Jan 29 08:47:35 crc kubenswrapper[4895]: I0129 08:47:35.826813 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jxzr" Jan 29 08:47:35 crc kubenswrapper[4895]: I0129 08:47:35.829554 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4jxzr"] Jan 29 08:47:35 crc kubenswrapper[4895]: I0129 08:47:35.830250 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 08:47:35 crc kubenswrapper[4895]: I0129 08:47:35.947215 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/399e86e5-8d5a-4663-8ce4-a919dd6f6333-utilities\") pod \"community-operators-4jxzr\" (UID: \"399e86e5-8d5a-4663-8ce4-a919dd6f6333\") " pod="openshift-marketplace/community-operators-4jxzr" Jan 29 08:47:35 crc kubenswrapper[4895]: I0129 08:47:35.947292 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8fnw\" (UniqueName: \"kubernetes.io/projected/399e86e5-8d5a-4663-8ce4-a919dd6f6333-kube-api-access-x8fnw\") pod \"community-operators-4jxzr\" (UID: \"399e86e5-8d5a-4663-8ce4-a919dd6f6333\") " pod="openshift-marketplace/community-operators-4jxzr" Jan 29 08:47:35 crc kubenswrapper[4895]: I0129 08:47:35.947359 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/399e86e5-8d5a-4663-8ce4-a919dd6f6333-catalog-content\") pod \"community-operators-4jxzr\" (UID: \"399e86e5-8d5a-4663-8ce4-a919dd6f6333\") " pod="openshift-marketplace/community-operators-4jxzr" Jan 29 08:47:36 crc kubenswrapper[4895]: 
I0129 08:47:36.006401 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fsrdx"] Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.007628 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsrdx" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.011071 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.026297 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsrdx"] Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.049953 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/399e86e5-8d5a-4663-8ce4-a919dd6f6333-utilities\") pod \"community-operators-4jxzr\" (UID: \"399e86e5-8d5a-4663-8ce4-a919dd6f6333\") " pod="openshift-marketplace/community-operators-4jxzr" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.050025 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8fnw\" (UniqueName: \"kubernetes.io/projected/399e86e5-8d5a-4663-8ce4-a919dd6f6333-kube-api-access-x8fnw\") pod \"community-operators-4jxzr\" (UID: \"399e86e5-8d5a-4663-8ce4-a919dd6f6333\") " pod="openshift-marketplace/community-operators-4jxzr" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.050057 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/399e86e5-8d5a-4663-8ce4-a919dd6f6333-catalog-content\") pod \"community-operators-4jxzr\" (UID: \"399e86e5-8d5a-4663-8ce4-a919dd6f6333\") " pod="openshift-marketplace/community-operators-4jxzr" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.050645 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/399e86e5-8d5a-4663-8ce4-a919dd6f6333-utilities\") pod \"community-operators-4jxzr\" (UID: \"399e86e5-8d5a-4663-8ce4-a919dd6f6333\") " pod="openshift-marketplace/community-operators-4jxzr" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.050712 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/399e86e5-8d5a-4663-8ce4-a919dd6f6333-catalog-content\") pod \"community-operators-4jxzr\" (UID: \"399e86e5-8d5a-4663-8ce4-a919dd6f6333\") " pod="openshift-marketplace/community-operators-4jxzr" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.078186 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8fnw\" (UniqueName: \"kubernetes.io/projected/399e86e5-8d5a-4663-8ce4-a919dd6f6333-kube-api-access-x8fnw\") pod \"community-operators-4jxzr\" (UID: \"399e86e5-8d5a-4663-8ce4-a919dd6f6333\") " pod="openshift-marketplace/community-operators-4jxzr" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.150877 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8276d7a1-1274-4d85-9243-ae6b7984ef52-utilities\") pod \"certified-operators-fsrdx\" (UID: \"8276d7a1-1274-4d85-9243-ae6b7984ef52\") " pod="openshift-marketplace/certified-operators-fsrdx" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.151338 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khtf9\" (UniqueName: \"kubernetes.io/projected/8276d7a1-1274-4d85-9243-ae6b7984ef52-kube-api-access-khtf9\") pod \"certified-operators-fsrdx\" (UID: \"8276d7a1-1274-4d85-9243-ae6b7984ef52\") " pod="openshift-marketplace/certified-operators-fsrdx" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.151471 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8276d7a1-1274-4d85-9243-ae6b7984ef52-catalog-content\") pod \"certified-operators-fsrdx\" (UID: \"8276d7a1-1274-4d85-9243-ae6b7984ef52\") " pod="openshift-marketplace/certified-operators-fsrdx" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.156633 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jxzr" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.254768 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8276d7a1-1274-4d85-9243-ae6b7984ef52-utilities\") pod \"certified-operators-fsrdx\" (UID: \"8276d7a1-1274-4d85-9243-ae6b7984ef52\") " pod="openshift-marketplace/certified-operators-fsrdx" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.255093 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khtf9\" (UniqueName: \"kubernetes.io/projected/8276d7a1-1274-4d85-9243-ae6b7984ef52-kube-api-access-khtf9\") pod \"certified-operators-fsrdx\" (UID: \"8276d7a1-1274-4d85-9243-ae6b7984ef52\") " pod="openshift-marketplace/certified-operators-fsrdx" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.255116 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8276d7a1-1274-4d85-9243-ae6b7984ef52-catalog-content\") pod \"certified-operators-fsrdx\" (UID: \"8276d7a1-1274-4d85-9243-ae6b7984ef52\") " pod="openshift-marketplace/certified-operators-fsrdx" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.255844 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8276d7a1-1274-4d85-9243-ae6b7984ef52-catalog-content\") pod \"certified-operators-fsrdx\" (UID: 
\"8276d7a1-1274-4d85-9243-ae6b7984ef52\") " pod="openshift-marketplace/certified-operators-fsrdx" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.255954 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8276d7a1-1274-4d85-9243-ae6b7984ef52-utilities\") pod \"certified-operators-fsrdx\" (UID: \"8276d7a1-1274-4d85-9243-ae6b7984ef52\") " pod="openshift-marketplace/certified-operators-fsrdx" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.278199 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khtf9\" (UniqueName: \"kubernetes.io/projected/8276d7a1-1274-4d85-9243-ae6b7984ef52-kube-api-access-khtf9\") pod \"certified-operators-fsrdx\" (UID: \"8276d7a1-1274-4d85-9243-ae6b7984ef52\") " pod="openshift-marketplace/certified-operators-fsrdx" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.321905 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsrdx" Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.596967 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4jxzr"] Jan 29 08:47:36 crc kubenswrapper[4895]: W0129 08:47:36.599173 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod399e86e5_8d5a_4663_8ce4_a919dd6f6333.slice/crio-43cc65b7c158a0cbd3ac178b7f35ac812f241e31899d3838a9de87da3b73ac60 WatchSource:0}: Error finding container 43cc65b7c158a0cbd3ac178b7f35ac812f241e31899d3838a9de87da3b73ac60: Status 404 returned error can't find the container with id 43cc65b7c158a0cbd3ac178b7f35ac812f241e31899d3838a9de87da3b73ac60 Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.722642 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsrdx"] Jan 29 08:47:36 crc kubenswrapper[4895]: 
W0129 08:47:36.728279 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8276d7a1_1274_4d85_9243_ae6b7984ef52.slice/crio-00f5c1c429538175ad99bb2b4b59946046dfe4392da4220a4a60f5616d5fed78 WatchSource:0}: Error finding container 00f5c1c429538175ad99bb2b4b59946046dfe4392da4220a4a60f5616d5fed78: Status 404 returned error can't find the container with id 00f5c1c429538175ad99bb2b4b59946046dfe4392da4220a4a60f5616d5fed78 Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.837555 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrdx" event={"ID":"8276d7a1-1274-4d85-9243-ae6b7984ef52","Type":"ContainerStarted","Data":"00f5c1c429538175ad99bb2b4b59946046dfe4392da4220a4a60f5616d5fed78"} Jan 29 08:47:36 crc kubenswrapper[4895]: I0129 08:47:36.838521 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jxzr" event={"ID":"399e86e5-8d5a-4663-8ce4-a919dd6f6333","Type":"ContainerStarted","Data":"43cc65b7c158a0cbd3ac178b7f35ac812f241e31899d3838a9de87da3b73ac60"} Jan 29 08:47:37 crc kubenswrapper[4895]: I0129 08:47:37.846820 4895 generic.go:334] "Generic (PLEG): container finished" podID="8276d7a1-1274-4d85-9243-ae6b7984ef52" containerID="77f6144b25f989c5427ac4fa5a7a9c195f63c28c6c2e56461c0dbe38bb0b868f" exitCode=0 Jan 29 08:47:37 crc kubenswrapper[4895]: I0129 08:47:37.847155 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrdx" event={"ID":"8276d7a1-1274-4d85-9243-ae6b7984ef52","Type":"ContainerDied","Data":"77f6144b25f989c5427ac4fa5a7a9c195f63c28c6c2e56461c0dbe38bb0b868f"} Jan 29 08:47:37 crc kubenswrapper[4895]: I0129 08:47:37.852146 4895 generic.go:334] "Generic (PLEG): container finished" podID="399e86e5-8d5a-4663-8ce4-a919dd6f6333" containerID="65b7d1e1e9175beddbc709d02e04cca2dafb5523d66408416b054446691cdc04" exitCode=0 Jan 29 08:47:37 crc 
kubenswrapper[4895]: I0129 08:47:37.852198 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jxzr" event={"ID":"399e86e5-8d5a-4663-8ce4-a919dd6f6333","Type":"ContainerDied","Data":"65b7d1e1e9175beddbc709d02e04cca2dafb5523d66408416b054446691cdc04"} Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.209124 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-btnwv"] Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.210495 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btnwv" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.214194 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.224963 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btnwv"] Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.388390 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms9bq\" (UniqueName: \"kubernetes.io/projected/7274a4b5-8d6d-4743-b3ca-b1c3be13abbb-kube-api-access-ms9bq\") pod \"redhat-marketplace-btnwv\" (UID: \"7274a4b5-8d6d-4743-b3ca-b1c3be13abbb\") " pod="openshift-marketplace/redhat-marketplace-btnwv" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.388844 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7274a4b5-8d6d-4743-b3ca-b1c3be13abbb-utilities\") pod \"redhat-marketplace-btnwv\" (UID: \"7274a4b5-8d6d-4743-b3ca-b1c3be13abbb\") " pod="openshift-marketplace/redhat-marketplace-btnwv" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.389503 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7274a4b5-8d6d-4743-b3ca-b1c3be13abbb-catalog-content\") pod \"redhat-marketplace-btnwv\" (UID: \"7274a4b5-8d6d-4743-b3ca-b1c3be13abbb\") " pod="openshift-marketplace/redhat-marketplace-btnwv" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.412789 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jrslr"] Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.414652 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrslr" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.418251 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.423224 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrslr"] Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.491265 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7274a4b5-8d6d-4743-b3ca-b1c3be13abbb-utilities\") pod \"redhat-marketplace-btnwv\" (UID: \"7274a4b5-8d6d-4743-b3ca-b1c3be13abbb\") " pod="openshift-marketplace/redhat-marketplace-btnwv" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.491331 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7274a4b5-8d6d-4743-b3ca-b1c3be13abbb-catalog-content\") pod \"redhat-marketplace-btnwv\" (UID: \"7274a4b5-8d6d-4743-b3ca-b1c3be13abbb\") " pod="openshift-marketplace/redhat-marketplace-btnwv" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.491572 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms9bq\" (UniqueName: 
\"kubernetes.io/projected/7274a4b5-8d6d-4743-b3ca-b1c3be13abbb-kube-api-access-ms9bq\") pod \"redhat-marketplace-btnwv\" (UID: \"7274a4b5-8d6d-4743-b3ca-b1c3be13abbb\") " pod="openshift-marketplace/redhat-marketplace-btnwv" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.492056 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7274a4b5-8d6d-4743-b3ca-b1c3be13abbb-utilities\") pod \"redhat-marketplace-btnwv\" (UID: \"7274a4b5-8d6d-4743-b3ca-b1c3be13abbb\") " pod="openshift-marketplace/redhat-marketplace-btnwv" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.492429 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7274a4b5-8d6d-4743-b3ca-b1c3be13abbb-catalog-content\") pod \"redhat-marketplace-btnwv\" (UID: \"7274a4b5-8d6d-4743-b3ca-b1c3be13abbb\") " pod="openshift-marketplace/redhat-marketplace-btnwv" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.513801 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms9bq\" (UniqueName: \"kubernetes.io/projected/7274a4b5-8d6d-4743-b3ca-b1c3be13abbb-kube-api-access-ms9bq\") pod \"redhat-marketplace-btnwv\" (UID: \"7274a4b5-8d6d-4743-b3ca-b1c3be13abbb\") " pod="openshift-marketplace/redhat-marketplace-btnwv" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.548554 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btnwv" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.595245 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h45bz\" (UniqueName: \"kubernetes.io/projected/0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55-kube-api-access-h45bz\") pod \"redhat-operators-jrslr\" (UID: \"0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55\") " pod="openshift-marketplace/redhat-operators-jrslr" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.595324 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55-utilities\") pod \"redhat-operators-jrslr\" (UID: \"0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55\") " pod="openshift-marketplace/redhat-operators-jrslr" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.595541 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55-catalog-content\") pod \"redhat-operators-jrslr\" (UID: \"0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55\") " pod="openshift-marketplace/redhat-operators-jrslr" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.698963 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h45bz\" (UniqueName: \"kubernetes.io/projected/0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55-kube-api-access-h45bz\") pod \"redhat-operators-jrslr\" (UID: \"0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55\") " pod="openshift-marketplace/redhat-operators-jrslr" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.699463 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55-utilities\") pod \"redhat-operators-jrslr\" (UID: 
\"0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55\") " pod="openshift-marketplace/redhat-operators-jrslr" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.699609 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55-catalog-content\") pod \"redhat-operators-jrslr\" (UID: \"0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55\") " pod="openshift-marketplace/redhat-operators-jrslr" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.700494 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55-utilities\") pod \"redhat-operators-jrslr\" (UID: \"0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55\") " pod="openshift-marketplace/redhat-operators-jrslr" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.702696 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55-catalog-content\") pod \"redhat-operators-jrslr\" (UID: \"0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55\") " pod="openshift-marketplace/redhat-operators-jrslr" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.728086 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h45bz\" (UniqueName: \"kubernetes.io/projected/0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55-kube-api-access-h45bz\") pod \"redhat-operators-jrslr\" (UID: \"0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55\") " pod="openshift-marketplace/redhat-operators-jrslr" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.741083 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrslr" Jan 29 08:47:38 crc kubenswrapper[4895]: I0129 08:47:38.996238 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btnwv"] Jan 29 08:47:39 crc kubenswrapper[4895]: W0129 08:47:39.004416 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7274a4b5_8d6d_4743_b3ca_b1c3be13abbb.slice/crio-aa4f82bb461980a8a796ce223e73145fa3b8c95138881d80ca12f505e7319374 WatchSource:0}: Error finding container aa4f82bb461980a8a796ce223e73145fa3b8c95138881d80ca12f505e7319374: Status 404 returned error can't find the container with id aa4f82bb461980a8a796ce223e73145fa3b8c95138881d80ca12f505e7319374 Jan 29 08:47:39 crc kubenswrapper[4895]: I0129 08:47:39.174498 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrslr"] Jan 29 08:47:39 crc kubenswrapper[4895]: W0129 08:47:39.183741 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c6ab8b9_4fbc_40f6_9a78_bfc18d82ba55.slice/crio-77126f2bb08aadef2b8bb8c06affa7b3bc524a13bd55a392969ab7db2d91987f WatchSource:0}: Error finding container 77126f2bb08aadef2b8bb8c06affa7b3bc524a13bd55a392969ab7db2d91987f: Status 404 returned error can't find the container with id 77126f2bb08aadef2b8bb8c06affa7b3bc524a13bd55a392969ab7db2d91987f Jan 29 08:47:39 crc kubenswrapper[4895]: I0129 08:47:39.869050 4895 generic.go:334] "Generic (PLEG): container finished" podID="399e86e5-8d5a-4663-8ce4-a919dd6f6333" containerID="df1f5300f212e0d466494141533e1a00d1b025956a350348243519858c24ca2f" exitCode=0 Jan 29 08:47:39 crc kubenswrapper[4895]: I0129 08:47:39.869184 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jxzr" 
event={"ID":"399e86e5-8d5a-4663-8ce4-a919dd6f6333","Type":"ContainerDied","Data":"df1f5300f212e0d466494141533e1a00d1b025956a350348243519858c24ca2f"} Jan 29 08:47:39 crc kubenswrapper[4895]: I0129 08:47:39.879695 4895 generic.go:334] "Generic (PLEG): container finished" podID="7274a4b5-8d6d-4743-b3ca-b1c3be13abbb" containerID="c80db7c6ba3121e75893c389b0c44d56ae83fea7c6a33e2205dd3b9a53230075" exitCode=0 Jan 29 08:47:39 crc kubenswrapper[4895]: I0129 08:47:39.879767 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btnwv" event={"ID":"7274a4b5-8d6d-4743-b3ca-b1c3be13abbb","Type":"ContainerDied","Data":"c80db7c6ba3121e75893c389b0c44d56ae83fea7c6a33e2205dd3b9a53230075"} Jan 29 08:47:39 crc kubenswrapper[4895]: I0129 08:47:39.879839 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btnwv" event={"ID":"7274a4b5-8d6d-4743-b3ca-b1c3be13abbb","Type":"ContainerStarted","Data":"aa4f82bb461980a8a796ce223e73145fa3b8c95138881d80ca12f505e7319374"} Jan 29 08:47:39 crc kubenswrapper[4895]: I0129 08:47:39.881755 4895 generic.go:334] "Generic (PLEG): container finished" podID="8276d7a1-1274-4d85-9243-ae6b7984ef52" containerID="a706e9f52d185411d1d1f213fbe62b1e30f83a45f913896eb8a2f3ac1a63349a" exitCode=0 Jan 29 08:47:39 crc kubenswrapper[4895]: I0129 08:47:39.881849 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrdx" event={"ID":"8276d7a1-1274-4d85-9243-ae6b7984ef52","Type":"ContainerDied","Data":"a706e9f52d185411d1d1f213fbe62b1e30f83a45f913896eb8a2f3ac1a63349a"} Jan 29 08:47:39 crc kubenswrapper[4895]: I0129 08:47:39.884614 4895 generic.go:334] "Generic (PLEG): container finished" podID="0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55" containerID="c84feb6fcad050e60a92fb2ddfe60001451f72125dd097814c9c6557c5c09177" exitCode=0 Jan 29 08:47:39 crc kubenswrapper[4895]: I0129 08:47:39.884652 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-jrslr" event={"ID":"0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55","Type":"ContainerDied","Data":"c84feb6fcad050e60a92fb2ddfe60001451f72125dd097814c9c6557c5c09177"} Jan 29 08:47:39 crc kubenswrapper[4895]: I0129 08:47:39.884680 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrslr" event={"ID":"0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55","Type":"ContainerStarted","Data":"77126f2bb08aadef2b8bb8c06affa7b3bc524a13bd55a392969ab7db2d91987f"} Jan 29 08:47:40 crc kubenswrapper[4895]: I0129 08:47:40.605224 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-rfq6k" Jan 29 08:47:40 crc kubenswrapper[4895]: I0129 08:47:40.668982 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bt8sz"] Jan 29 08:47:42 crc kubenswrapper[4895]: I0129 08:47:42.917737 4895 generic.go:334] "Generic (PLEG): container finished" podID="7274a4b5-8d6d-4743-b3ca-b1c3be13abbb" containerID="0065e314ddae85dec9c5ea10033c7267255a381230f216b6d841e9b8b4704349" exitCode=0 Jan 29 08:47:42 crc kubenswrapper[4895]: I0129 08:47:42.917894 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btnwv" event={"ID":"7274a4b5-8d6d-4743-b3ca-b1c3be13abbb","Type":"ContainerDied","Data":"0065e314ddae85dec9c5ea10033c7267255a381230f216b6d841e9b8b4704349"} Jan 29 08:47:42 crc kubenswrapper[4895]: I0129 08:47:42.923065 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrdx" event={"ID":"8276d7a1-1274-4d85-9243-ae6b7984ef52","Type":"ContainerStarted","Data":"fba3f6acd65716218a6170668201592e8e4cc9782d7b2215094a833664943bd6"} Jan 29 08:47:42 crc kubenswrapper[4895]: I0129 08:47:42.925010 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrslr" 
event={"ID":"0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55","Type":"ContainerStarted","Data":"5731963321a29f6298b63220444d594079a7a703d3938f5672ef18ef15235755"} Jan 29 08:47:42 crc kubenswrapper[4895]: I0129 08:47:42.930153 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jxzr" event={"ID":"399e86e5-8d5a-4663-8ce4-a919dd6f6333","Type":"ContainerStarted","Data":"a885faf0c365a3b6df8c2cdd2fafe5bb05485dfa770d8814324065dae07be6a7"} Jan 29 08:47:42 crc kubenswrapper[4895]: I0129 08:47:42.997209 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fsrdx" podStartSLOduration=5.078399941 podStartE2EDuration="7.997176742s" podCreationTimestamp="2026-01-29 08:47:35 +0000 UTC" firstStartedPulling="2026-01-29 08:47:37.849066576 +0000 UTC m=+399.490574742" lastFinishedPulling="2026-01-29 08:47:40.767843397 +0000 UTC m=+402.409351543" observedRunningTime="2026-01-29 08:47:42.993414989 +0000 UTC m=+404.634923135" watchObservedRunningTime="2026-01-29 08:47:42.997176742 +0000 UTC m=+404.638684888" Jan 29 08:47:43 crc kubenswrapper[4895]: I0129 08:47:43.018735 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4jxzr" podStartSLOduration=3.817445992 podStartE2EDuration="8.018711837s" podCreationTimestamp="2026-01-29 08:47:35 +0000 UTC" firstStartedPulling="2026-01-29 08:47:37.853800676 +0000 UTC m=+399.495308822" lastFinishedPulling="2026-01-29 08:47:42.055066521 +0000 UTC m=+403.696574667" observedRunningTime="2026-01-29 08:47:43.015701054 +0000 UTC m=+404.657209200" watchObservedRunningTime="2026-01-29 08:47:43.018711837 +0000 UTC m=+404.660219983" Jan 29 08:47:43 crc kubenswrapper[4895]: I0129 08:47:43.941327 4895 generic.go:334] "Generic (PLEG): container finished" podID="0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55" containerID="5731963321a29f6298b63220444d594079a7a703d3938f5672ef18ef15235755" exitCode=0 Jan 
29 08:47:43 crc kubenswrapper[4895]: I0129 08:47:43.941463 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrslr" event={"ID":"0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55","Type":"ContainerDied","Data":"5731963321a29f6298b63220444d594079a7a703d3938f5672ef18ef15235755"} Jan 29 08:47:43 crc kubenswrapper[4895]: I0129 08:47:43.949867 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btnwv" event={"ID":"7274a4b5-8d6d-4743-b3ca-b1c3be13abbb","Type":"ContainerStarted","Data":"540cc29c01374359de74cda423f276b72ccb0fdd050188b510ba05ff97bc5394"} Jan 29 08:47:44 crc kubenswrapper[4895]: I0129 08:47:44.001049 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-btnwv" podStartSLOduration=2.43566827 podStartE2EDuration="6.001022885s" podCreationTimestamp="2026-01-29 08:47:38 +0000 UTC" firstStartedPulling="2026-01-29 08:47:39.885326019 +0000 UTC m=+401.526834155" lastFinishedPulling="2026-01-29 08:47:43.450680624 +0000 UTC m=+405.092188770" observedRunningTime="2026-01-29 08:47:43.996222423 +0000 UTC m=+405.637730579" watchObservedRunningTime="2026-01-29 08:47:44.001022885 +0000 UTC m=+405.642531031" Jan 29 08:47:45 crc kubenswrapper[4895]: I0129 08:47:45.965341 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrslr" event={"ID":"0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55","Type":"ContainerStarted","Data":"7650eec18d33d669e32ecd148914b030bb675dd5307726be7762ab3e88353b3b"} Jan 29 08:47:45 crc kubenswrapper[4895]: I0129 08:47:45.989469 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jrslr" podStartSLOduration=3.200595225 podStartE2EDuration="7.989443709s" podCreationTimestamp="2026-01-29 08:47:38 +0000 UTC" firstStartedPulling="2026-01-29 08:47:39.887488438 +0000 UTC m=+401.528996584" 
lastFinishedPulling="2026-01-29 08:47:44.676336922 +0000 UTC m=+406.317845068" observedRunningTime="2026-01-29 08:47:45.984102823 +0000 UTC m=+407.625610979" watchObservedRunningTime="2026-01-29 08:47:45.989443709 +0000 UTC m=+407.630951855" Jan 29 08:47:46 crc kubenswrapper[4895]: I0129 08:47:46.021133 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:47:46 crc kubenswrapper[4895]: I0129 08:47:46.021225 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:47:46 crc kubenswrapper[4895]: I0129 08:47:46.157597 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4jxzr" Jan 29 08:47:46 crc kubenswrapper[4895]: I0129 08:47:46.157659 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4jxzr" Jan 29 08:47:46 crc kubenswrapper[4895]: I0129 08:47:46.219100 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4jxzr" Jan 29 08:47:46 crc kubenswrapper[4895]: I0129 08:47:46.322284 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fsrdx" Jan 29 08:47:46 crc kubenswrapper[4895]: I0129 08:47:46.322361 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fsrdx" Jan 29 08:47:46 crc kubenswrapper[4895]: I0129 
08:47:46.362342 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fsrdx" Jan 29 08:47:48 crc kubenswrapper[4895]: I0129 08:47:48.548810 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-btnwv" Jan 29 08:47:48 crc kubenswrapper[4895]: I0129 08:47:48.550001 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-btnwv" Jan 29 08:47:48 crc kubenswrapper[4895]: I0129 08:47:48.599559 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-btnwv" Jan 29 08:47:48 crc kubenswrapper[4895]: I0129 08:47:48.742242 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jrslr" Jan 29 08:47:48 crc kubenswrapper[4895]: I0129 08:47:48.742404 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jrslr" Jan 29 08:47:49 crc kubenswrapper[4895]: I0129 08:47:49.025980 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-btnwv" Jan 29 08:47:49 crc kubenswrapper[4895]: I0129 08:47:49.782680 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jrslr" podUID="0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55" containerName="registry-server" probeResult="failure" output=< Jan 29 08:47:49 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Jan 29 08:47:49 crc kubenswrapper[4895]: > Jan 29 08:47:56 crc kubenswrapper[4895]: I0129 08:47:56.202095 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4jxzr" Jan 29 08:47:56 crc kubenswrapper[4895]: I0129 08:47:56.366566 4895 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fsrdx" Jan 29 08:47:58 crc kubenswrapper[4895]: I0129 08:47:58.796437 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jrslr" Jan 29 08:47:58 crc kubenswrapper[4895]: I0129 08:47:58.841354 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jrslr" Jan 29 08:48:05 crc kubenswrapper[4895]: I0129 08:48:05.725105 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" podUID="766285e2-63c4-4073-9b24-d5fbf4b26638" containerName="registry" containerID="cri-o://9207aefda22773d043b889d245251c7bd738e5e93d3725c8ce56f6817e2828aa" gracePeriod=30 Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.098750 4895 generic.go:334] "Generic (PLEG): container finished" podID="766285e2-63c4-4073-9b24-d5fbf4b26638" containerID="9207aefda22773d043b889d245251c7bd738e5e93d3725c8ce56f6817e2828aa" exitCode=0 Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.098865 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" event={"ID":"766285e2-63c4-4073-9b24-d5fbf4b26638","Type":"ContainerDied","Data":"9207aefda22773d043b889d245251c7bd738e5e93d3725c8ce56f6817e2828aa"} Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.156381 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.328605 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/766285e2-63c4-4073-9b24-d5fbf4b26638-installation-pull-secrets\") pod \"766285e2-63c4-4073-9b24-d5fbf4b26638\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.328705 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/766285e2-63c4-4073-9b24-d5fbf4b26638-ca-trust-extracted\") pod \"766285e2-63c4-4073-9b24-d5fbf4b26638\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.328762 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/766285e2-63c4-4073-9b24-d5fbf4b26638-registry-certificates\") pod \"766285e2-63c4-4073-9b24-d5fbf4b26638\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.328943 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"766285e2-63c4-4073-9b24-d5fbf4b26638\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.328970 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/766285e2-63c4-4073-9b24-d5fbf4b26638-trusted-ca\") pod \"766285e2-63c4-4073-9b24-d5fbf4b26638\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.329007 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-registry-tls\") pod \"766285e2-63c4-4073-9b24-d5fbf4b26638\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.329091 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbxtd\" (UniqueName: \"kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-kube-api-access-gbxtd\") pod \"766285e2-63c4-4073-9b24-d5fbf4b26638\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.329141 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-bound-sa-token\") pod \"766285e2-63c4-4073-9b24-d5fbf4b26638\" (UID: \"766285e2-63c4-4073-9b24-d5fbf4b26638\") " Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.333068 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/766285e2-63c4-4073-9b24-d5fbf4b26638-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "766285e2-63c4-4073-9b24-d5fbf4b26638" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.333072 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/766285e2-63c4-4073-9b24-d5fbf4b26638-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "766285e2-63c4-4073-9b24-d5fbf4b26638" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.337778 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "766285e2-63c4-4073-9b24-d5fbf4b26638" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.338467 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "766285e2-63c4-4073-9b24-d5fbf4b26638" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.338888 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-kube-api-access-gbxtd" (OuterVolumeSpecName: "kube-api-access-gbxtd") pod "766285e2-63c4-4073-9b24-d5fbf4b26638" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638"). InnerVolumeSpecName "kube-api-access-gbxtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.347566 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/766285e2-63c4-4073-9b24-d5fbf4b26638-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "766285e2-63c4-4073-9b24-d5fbf4b26638" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.349472 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/766285e2-63c4-4073-9b24-d5fbf4b26638-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "766285e2-63c4-4073-9b24-d5fbf4b26638" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.385228 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "766285e2-63c4-4073-9b24-d5fbf4b26638" (UID: "766285e2-63c4-4073-9b24-d5fbf4b26638"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.430429 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/766285e2-63c4-4073-9b24-d5fbf4b26638-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.430479 4895 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.430495 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbxtd\" (UniqueName: \"kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-kube-api-access-gbxtd\") on node \"crc\" DevicePath \"\"" Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.430515 4895 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/766285e2-63c4-4073-9b24-d5fbf4b26638-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.430529 4895 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/766285e2-63c4-4073-9b24-d5fbf4b26638-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.430542 4895 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/766285e2-63c4-4073-9b24-d5fbf4b26638-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 08:48:06 crc kubenswrapper[4895]: I0129 08:48:06.430558 4895 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/766285e2-63c4-4073-9b24-d5fbf4b26638-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 08:48:07 crc kubenswrapper[4895]: I0129 08:48:07.106483 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" event={"ID":"766285e2-63c4-4073-9b24-d5fbf4b26638","Type":"ContainerDied","Data":"06b3c269250f81fd7edb9dc4f1e4116846e2b526301631ce54c717d3b09bd1dc"} Jan 29 08:48:07 crc kubenswrapper[4895]: I0129 08:48:07.106608 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bt8sz" Jan 29 08:48:07 crc kubenswrapper[4895]: I0129 08:48:07.108085 4895 scope.go:117] "RemoveContainer" containerID="9207aefda22773d043b889d245251c7bd738e5e93d3725c8ce56f6817e2828aa" Jan 29 08:48:07 crc kubenswrapper[4895]: I0129 08:48:07.144117 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bt8sz"] Jan 29 08:48:07 crc kubenswrapper[4895]: I0129 08:48:07.151145 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bt8sz"] Jan 29 08:48:07 crc kubenswrapper[4895]: I0129 08:48:07.220834 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="766285e2-63c4-4073-9b24-d5fbf4b26638" path="/var/lib/kubelet/pods/766285e2-63c4-4073-9b24-d5fbf4b26638/volumes" Jan 29 08:48:16 crc kubenswrapper[4895]: I0129 08:48:16.020973 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:48:16 crc kubenswrapper[4895]: I0129 08:48:16.021997 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:48:16 crc kubenswrapper[4895]: I0129 08:48:16.022080 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:48:16 crc kubenswrapper[4895]: I0129 08:48:16.024020 4895 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9f292e90b87004fc0704882047c33934c7850e8be2171a5825e64c3cef92531"} pod="openshift-machine-config-operator/machine-config-daemon-z82hk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:48:16 crc kubenswrapper[4895]: I0129 08:48:16.024132 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" containerID="cri-o://f9f292e90b87004fc0704882047c33934c7850e8be2171a5825e64c3cef92531" gracePeriod=600 Jan 29 08:48:16 crc kubenswrapper[4895]: I0129 08:48:16.169039 4895 generic.go:334] "Generic (PLEG): container finished" podID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerID="f9f292e90b87004fc0704882047c33934c7850e8be2171a5825e64c3cef92531" exitCode=0 Jan 29 08:48:16 crc kubenswrapper[4895]: I0129 08:48:16.169103 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerDied","Data":"f9f292e90b87004fc0704882047c33934c7850e8be2171a5825e64c3cef92531"} Jan 29 08:48:16 crc kubenswrapper[4895]: I0129 08:48:16.169154 4895 scope.go:117] "RemoveContainer" containerID="fbf0151d00f2c67c4800b5b640d06a97327efbc367b354968c823444020a6bd3" Jan 29 08:48:17 crc kubenswrapper[4895]: I0129 08:48:17.178822 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerStarted","Data":"aa1a317827baf23906951927e8f8b8c0dda6533f94b12af3d4987c30b71139e1"} Jan 29 08:50:16 crc kubenswrapper[4895]: I0129 08:50:16.021226 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:50:16 crc kubenswrapper[4895]: I0129 08:50:16.021884 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:50:46 crc kubenswrapper[4895]: I0129 08:50:46.020417 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:50:46 crc kubenswrapper[4895]: I0129 08:50:46.022349 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:51:16 crc kubenswrapper[4895]: I0129 08:51:16.021118 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:51:16 crc kubenswrapper[4895]: I0129 08:51:16.022152 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:51:16 crc kubenswrapper[4895]: I0129 08:51:16.022253 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:51:16 crc kubenswrapper[4895]: I0129 08:51:16.023241 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa1a317827baf23906951927e8f8b8c0dda6533f94b12af3d4987c30b71139e1"} pod="openshift-machine-config-operator/machine-config-daemon-z82hk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:51:16 crc kubenswrapper[4895]: I0129 08:51:16.023361 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" containerID="cri-o://aa1a317827baf23906951927e8f8b8c0dda6533f94b12af3d4987c30b71139e1" gracePeriod=600 Jan 29 08:51:16 crc kubenswrapper[4895]: I0129 08:51:16.359933 4895 generic.go:334] "Generic (PLEG): container finished" podID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerID="aa1a317827baf23906951927e8f8b8c0dda6533f94b12af3d4987c30b71139e1" exitCode=0 Jan 29 08:51:16 crc kubenswrapper[4895]: I0129 08:51:16.360073 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerDied","Data":"aa1a317827baf23906951927e8f8b8c0dda6533f94b12af3d4987c30b71139e1"} Jan 29 08:51:16 crc kubenswrapper[4895]: I0129 08:51:16.360501 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" 
event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerStarted","Data":"196dd09f37b20983a231714c51e3920c9238c0dcfbe938ccc9dfef7054a9c34d"} Jan 29 08:51:16 crc kubenswrapper[4895]: I0129 08:51:16.360542 4895 scope.go:117] "RemoveContainer" containerID="f9f292e90b87004fc0704882047c33934c7850e8be2171a5825e64c3cef92531" Jan 29 08:52:54 crc kubenswrapper[4895]: I0129 08:52:54.884904 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rwbcg"] Jan 29 08:52:54 crc kubenswrapper[4895]: E0129 08:52:54.886042 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766285e2-63c4-4073-9b24-d5fbf4b26638" containerName="registry" Jan 29 08:52:54 crc kubenswrapper[4895]: I0129 08:52:54.886061 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="766285e2-63c4-4073-9b24-d5fbf4b26638" containerName="registry" Jan 29 08:52:54 crc kubenswrapper[4895]: I0129 08:52:54.886189 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="766285e2-63c4-4073-9b24-d5fbf4b26638" containerName="registry" Jan 29 08:52:54 crc kubenswrapper[4895]: I0129 08:52:54.886700 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rwbcg" Jan 29 08:52:54 crc kubenswrapper[4895]: I0129 08:52:54.889522 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 29 08:52:54 crc kubenswrapper[4895]: I0129 08:52:54.889954 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 29 08:52:54 crc kubenswrapper[4895]: I0129 08:52:54.890137 4895 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-v4sj9" Jan 29 08:52:54 crc kubenswrapper[4895]: I0129 08:52:54.890520 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-lmpk5"] Jan 29 08:52:54 crc kubenswrapper[4895]: I0129 08:52:54.891239 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-lmpk5" Jan 29 08:52:54 crc kubenswrapper[4895]: I0129 08:52:54.896201 4895 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zkxk9" Jan 29 08:52:54 crc kubenswrapper[4895]: I0129 08:52:54.912350 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-lmpk5"] Jan 29 08:52:54 crc kubenswrapper[4895]: I0129 08:52:54.924284 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-ttlgx"] Jan 29 08:52:54 crc kubenswrapper[4895]: I0129 08:52:54.925219 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-ttlgx" Jan 29 08:52:54 crc kubenswrapper[4895]: I0129 08:52:54.927855 4895 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-tpb4f" Jan 29 08:52:54 crc kubenswrapper[4895]: I0129 08:52:54.939290 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-ttlgx"] Jan 29 08:52:54 crc kubenswrapper[4895]: I0129 08:52:54.944111 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rwbcg"] Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.008838 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b77r\" (UniqueName: \"kubernetes.io/projected/6b07c0c4-eb39-4313-b842-9a36bd400bae-kube-api-access-7b77r\") pod \"cert-manager-858654f9db-lmpk5\" (UID: \"6b07c0c4-eb39-4313-b842-9a36bd400bae\") " pod="cert-manager/cert-manager-858654f9db-lmpk5" Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.008887 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll4vt\" (UniqueName: \"kubernetes.io/projected/754acefa-2366-4c3a-97be-e4a941d8066b-kube-api-access-ll4vt\") pod \"cert-manager-cainjector-cf98fcc89-rwbcg\" (UID: \"754acefa-2366-4c3a-97be-e4a941d8066b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rwbcg" Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.110260 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2wfs\" (UniqueName: \"kubernetes.io/projected/0e41817c-460a-4a92-9220-10fde5db690b-kube-api-access-t2wfs\") pod \"cert-manager-webhook-687f57d79b-ttlgx\" (UID: \"0e41817c-460a-4a92-9220-10fde5db690b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-ttlgx" Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.110411 
4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b77r\" (UniqueName: \"kubernetes.io/projected/6b07c0c4-eb39-4313-b842-9a36bd400bae-kube-api-access-7b77r\") pod \"cert-manager-858654f9db-lmpk5\" (UID: \"6b07c0c4-eb39-4313-b842-9a36bd400bae\") " pod="cert-manager/cert-manager-858654f9db-lmpk5" Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.110461 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll4vt\" (UniqueName: \"kubernetes.io/projected/754acefa-2366-4c3a-97be-e4a941d8066b-kube-api-access-ll4vt\") pod \"cert-manager-cainjector-cf98fcc89-rwbcg\" (UID: \"754acefa-2366-4c3a-97be-e4a941d8066b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rwbcg" Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.130691 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll4vt\" (UniqueName: \"kubernetes.io/projected/754acefa-2366-4c3a-97be-e4a941d8066b-kube-api-access-ll4vt\") pod \"cert-manager-cainjector-cf98fcc89-rwbcg\" (UID: \"754acefa-2366-4c3a-97be-e4a941d8066b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rwbcg" Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.138975 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b77r\" (UniqueName: \"kubernetes.io/projected/6b07c0c4-eb39-4313-b842-9a36bd400bae-kube-api-access-7b77r\") pod \"cert-manager-858654f9db-lmpk5\" (UID: \"6b07c0c4-eb39-4313-b842-9a36bd400bae\") " pod="cert-manager/cert-manager-858654f9db-lmpk5" Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.208696 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rwbcg" Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.218174 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2wfs\" (UniqueName: \"kubernetes.io/projected/0e41817c-460a-4a92-9220-10fde5db690b-kube-api-access-t2wfs\") pod \"cert-manager-webhook-687f57d79b-ttlgx\" (UID: \"0e41817c-460a-4a92-9220-10fde5db690b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-ttlgx" Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.225651 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-lmpk5" Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.245588 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2wfs\" (UniqueName: \"kubernetes.io/projected/0e41817c-460a-4a92-9220-10fde5db690b-kube-api-access-t2wfs\") pod \"cert-manager-webhook-687f57d79b-ttlgx\" (UID: \"0e41817c-460a-4a92-9220-10fde5db690b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-ttlgx" Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.442951 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rwbcg"] Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.451655 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.501710 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-lmpk5"] Jan 29 08:52:55 crc kubenswrapper[4895]: W0129 08:52:55.507234 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b07c0c4_eb39_4313_b842_9a36bd400bae.slice/crio-cd02acb344e7f300b0eb2faaf83da9d0d31936055d8525ef1ab9ce786c993eec WatchSource:0}: Error finding container 
cd02acb344e7f300b0eb2faaf83da9d0d31936055d8525ef1ab9ce786c993eec: Status 404 returned error can't find the container with id cd02acb344e7f300b0eb2faaf83da9d0d31936055d8525ef1ab9ce786c993eec Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.538099 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-ttlgx" Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.722325 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-ttlgx"] Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.991112 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rwbcg" event={"ID":"754acefa-2366-4c3a-97be-e4a941d8066b","Type":"ContainerStarted","Data":"2a5c1df54a93a60b501a5d9f00d3ecfdf291b2959f1bffb5ed733ad9ac20124a"} Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.993037 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-ttlgx" event={"ID":"0e41817c-460a-4a92-9220-10fde5db690b","Type":"ContainerStarted","Data":"98383d2d9a377e0dbe1d18d84db99b105fb45f93979077c46cbcc6d1f08b6c25"} Jan 29 08:52:55 crc kubenswrapper[4895]: I0129 08:52:55.994179 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-lmpk5" event={"ID":"6b07c0c4-eb39-4313-b842-9a36bd400bae","Type":"ContainerStarted","Data":"cd02acb344e7f300b0eb2faaf83da9d0d31936055d8525ef1ab9ce786c993eec"} Jan 29 08:53:02 crc kubenswrapper[4895]: I0129 08:53:02.040201 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rwbcg" event={"ID":"754acefa-2366-4c3a-97be-e4a941d8066b","Type":"ContainerStarted","Data":"7a5c9490f47a46bd4625aa83ac37891125e1cac0baa9a3109682b0c176b9979a"} Jan 29 08:53:02 crc kubenswrapper[4895]: I0129 08:53:02.042960 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-webhook-687f57d79b-ttlgx" event={"ID":"0e41817c-460a-4a92-9220-10fde5db690b","Type":"ContainerStarted","Data":"4e90f7811ef38b533d8c585b329892c5e9cfe88a20d391c1b129463518350378"} Jan 29 08:53:02 crc kubenswrapper[4895]: I0129 08:53:02.043143 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-ttlgx" Jan 29 08:53:02 crc kubenswrapper[4895]: I0129 08:53:02.044629 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-lmpk5" event={"ID":"6b07c0c4-eb39-4313-b842-9a36bd400bae","Type":"ContainerStarted","Data":"3e09b263f0b3672b76e250bcdf9b7f11f840d0f4507a98e416ad7eaff7adf4bd"} Jan 29 08:53:02 crc kubenswrapper[4895]: I0129 08:53:02.058827 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rwbcg" podStartSLOduration=2.09902195 podStartE2EDuration="8.058798139s" podCreationTimestamp="2026-01-29 08:52:54 +0000 UTC" firstStartedPulling="2026-01-29 08:52:55.451332924 +0000 UTC m=+717.092841070" lastFinishedPulling="2026-01-29 08:53:01.411109113 +0000 UTC m=+723.052617259" observedRunningTime="2026-01-29 08:53:02.055766289 +0000 UTC m=+723.697274445" watchObservedRunningTime="2026-01-29 08:53:02.058798139 +0000 UTC m=+723.700306275" Jan 29 08:53:02 crc kubenswrapper[4895]: I0129 08:53:02.122261 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-lmpk5" podStartSLOduration=2.279054232 podStartE2EDuration="8.122239637s" podCreationTimestamp="2026-01-29 08:52:54 +0000 UTC" firstStartedPulling="2026-01-29 08:52:55.51016474 +0000 UTC m=+717.151672886" lastFinishedPulling="2026-01-29 08:53:01.353350145 +0000 UTC m=+722.994858291" observedRunningTime="2026-01-29 08:53:02.12156407 +0000 UTC m=+723.763072226" watchObservedRunningTime="2026-01-29 08:53:02.122239637 +0000 UTC m=+723.763747783" Jan 29 08:53:02 crc 
kubenswrapper[4895]: I0129 08:53:02.143343 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-ttlgx" podStartSLOduration=2.400763502 podStartE2EDuration="8.1433243s" podCreationTimestamp="2026-01-29 08:52:54 +0000 UTC" firstStartedPulling="2026-01-29 08:52:55.730293258 +0000 UTC m=+717.371801414" lastFinishedPulling="2026-01-29 08:53:01.472854026 +0000 UTC m=+723.114362212" observedRunningTime="2026-01-29 08:53:02.141466449 +0000 UTC m=+723.782974605" watchObservedRunningTime="2026-01-29 08:53:02.1433243 +0000 UTC m=+723.784832446" Jan 29 08:53:03 crc kubenswrapper[4895]: I0129 08:53:03.992877 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x4zc4"] Jan 29 08:53:03 crc kubenswrapper[4895]: I0129 08:53:03.993355 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovn-controller" containerID="cri-o://51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1" gracePeriod=30 Jan 29 08:53:03 crc kubenswrapper[4895]: I0129 08:53:03.993853 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="sbdb" containerID="cri-o://896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659" gracePeriod=30 Jan 29 08:53:03 crc kubenswrapper[4895]: I0129 08:53:03.993969 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="nbdb" containerID="cri-o://0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d" gracePeriod=30 Jan 29 08:53:03 crc kubenswrapper[4895]: I0129 08:53:03.994038 4895 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovn-acl-logging" containerID="cri-o://bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034" gracePeriod=30 Jan 29 08:53:03 crc kubenswrapper[4895]: I0129 08:53:03.994105 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="kube-rbac-proxy-node" containerID="cri-o://43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785" gracePeriod=30 Jan 29 08:53:03 crc kubenswrapper[4895]: I0129 08:53:03.994296 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="northd" containerID="cri-o://d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef" gracePeriod=30 Jan 29 08:53:03 crc kubenswrapper[4895]: I0129 08:53:03.994367 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a" gracePeriod=30 Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.077237 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovnkube-controller" containerID="cri-o://fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80" gracePeriod=30 Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.894074 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovnkube-controller/3.log" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 
08:53:04.896993 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovn-acl-logging/0.log" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.897727 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovn-controller/0.log" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.900810 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.961719 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mc64l"] Jan 29 08:53:04 crc kubenswrapper[4895]: E0129 08:53:04.962150 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="northd" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962187 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="northd" Jan 29 08:53:04 crc kubenswrapper[4895]: E0129 08:53:04.962209 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovnkube-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962222 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovnkube-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: E0129 08:53:04.962233 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="nbdb" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962244 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="nbdb" Jan 29 08:53:04 crc kubenswrapper[4895]: E0129 08:53:04.962261 4895 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovnkube-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962273 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovnkube-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: E0129 08:53:04.962288 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovnkube-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962298 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovnkube-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: E0129 08:53:04.962312 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962323 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 08:53:04 crc kubenswrapper[4895]: E0129 08:53:04.962340 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="kube-rbac-proxy-node" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962351 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="kube-rbac-proxy-node" Jan 29 08:53:04 crc kubenswrapper[4895]: E0129 08:53:04.962363 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="kubecfg-setup" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962373 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="kubecfg-setup" Jan 29 08:53:04 crc kubenswrapper[4895]: 
E0129 08:53:04.962395 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovnkube-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962405 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovnkube-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: E0129 08:53:04.962418 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovn-acl-logging" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962428 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovn-acl-logging" Jan 29 08:53:04 crc kubenswrapper[4895]: E0129 08:53:04.962442 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="sbdb" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962452 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="sbdb" Jan 29 08:53:04 crc kubenswrapper[4895]: E0129 08:53:04.962469 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovn-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962480 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovn-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962639 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovnkube-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962660 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovn-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962679 
4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="nbdb" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962692 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovn-acl-logging" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962712 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962727 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovnkube-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962741 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="kube-rbac-proxy-node" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962754 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovnkube-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962766 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="northd" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962778 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovnkube-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.962793 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="sbdb" Jan 29 08:53:04 crc kubenswrapper[4895]: E0129 08:53:04.963035 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovnkube-controller" Jan 29 08:53:04 crc 
kubenswrapper[4895]: I0129 08:53:04.963052 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovnkube-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.963196 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerName="ovnkube-controller" Jan 29 08:53:04 crc kubenswrapper[4895]: I0129 08:53:04.966054 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.059007 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-cni-netd\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.059082 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-systemd\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.059160 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovn-node-metrics-cert\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.059201 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovnkube-config\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 
08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.059219 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-node-log\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.059276 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-node-log" (OuterVolumeSpecName: "node-log") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.059322 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.059448 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.059505 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.059587 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovnkube-script-lib\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060107 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060268 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060350 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-slash\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060412 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-ovn\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060436 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-run-ovn-kubernetes\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060456 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-env-overrides\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060497 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-kubelet\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060449 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-slash" (OuterVolumeSpecName: "host-slash") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060515 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-run-netns\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060482 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060533 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-log-socket\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060568 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-var-lib-openvswitch\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060585 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-openvswitch\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060599 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-etc-openvswitch\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060649 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnjb9\" (UniqueName: \"kubernetes.io/projected/7621f3ab-b09c-4a23-8031-645d96fe5c9b-kube-api-access-xnjb9\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060663 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-systemd-units\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060681 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-cni-bin\") pod \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\" (UID: \"7621f3ab-b09c-4a23-8031-645d96fe5c9b\") " Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060565 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060591 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060619 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060647 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060667 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-log-socket" (OuterVolumeSpecName: "log-socket") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060692 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060734 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060748 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060762 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-kubelet\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060774 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060776 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060803 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-run-ovn\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060826 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-run-ovn-kubernetes\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060849 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-cni-netd\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060884 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-run-netns\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.060957 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-cni-bin\") pod \"ovnkube-node-mc64l\" (UID: 
\"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061006 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-node-log\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061034 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/73290130-a373-4110-8942-7a92d143d977-ovnkube-script-lib\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061060 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-slash\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061106 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-log-socket\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061184 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-run-systemd\") pod \"ovnkube-node-mc64l\" (UID: 
\"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061232 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-var-lib-openvswitch\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061255 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-systemd-units\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061280 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/73290130-a373-4110-8942-7a92d143d977-ovn-node-metrics-cert\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061304 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/73290130-a373-4110-8942-7a92d143d977-ovnkube-config\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061335 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56dxw\" (UniqueName: 
\"kubernetes.io/projected/73290130-a373-4110-8942-7a92d143d977-kube-api-access-56dxw\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061374 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061401 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/73290130-a373-4110-8942-7a92d143d977-env-overrides\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061434 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-etc-openvswitch\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061458 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-run-openvswitch\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061513 4895 reconciler_common.go:293] "Volume detached for 
volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061528 4895 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061541 4895 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061553 4895 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061564 4895 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061575 4895 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061588 4895 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061602 4895 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061615 4895 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061625 4895 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061636 4895 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7621f3ab-b09c-4a23-8031-645d96fe5c9b-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061647 4895 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061745 4895 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061806 4895 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061828 4895 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 
08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061890 4895 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.061959 4895 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.066894 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7621f3ab-b09c-4a23-8031-645d96fe5c9b-kube-api-access-xnjb9" (OuterVolumeSpecName: "kube-api-access-xnjb9") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "kube-api-access-xnjb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.067541 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.075465 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "7621f3ab-b09c-4a23-8031-645d96fe5c9b" (UID: "7621f3ab-b09c-4a23-8031-645d96fe5c9b"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.087350 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovnkube-controller/3.log" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.090229 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovn-acl-logging/0.log" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.090955 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4zc4_7621f3ab-b09c-4a23-8031-645d96fe5c9b/ovn-controller/0.log" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091478 4895 generic.go:334] "Generic (PLEG): container finished" podID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerID="fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80" exitCode=0 Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091515 4895 generic.go:334] "Generic (PLEG): container finished" podID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerID="896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659" exitCode=0 Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091526 4895 generic.go:334] "Generic (PLEG): container finished" podID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerID="0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d" exitCode=0 Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091535 4895 generic.go:334] "Generic (PLEG): container finished" podID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerID="d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef" exitCode=0 Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091545 4895 generic.go:334] "Generic (PLEG): container finished" podID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" 
containerID="fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a" exitCode=0 Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091554 4895 generic.go:334] "Generic (PLEG): container finished" podID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerID="43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785" exitCode=0 Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091563 4895 generic.go:334] "Generic (PLEG): container finished" podID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerID="bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034" exitCode=143 Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091580 4895 generic.go:334] "Generic (PLEG): container finished" podID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" containerID="51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1" exitCode=143 Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091604 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091593 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerDied","Data":"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091759 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerDied","Data":"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091798 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" 
event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerDied","Data":"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091815 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerDied","Data":"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091833 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerDied","Data":"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091850 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerDied","Data":"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091888 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091907 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091943 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091953 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091964 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091971 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091978 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091987 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092018 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092035 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerDied","Data":"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092051 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092061 4895 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092069 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092100 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.091857 4895 scope.go:117] "RemoveContainer" containerID="fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092111 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092195 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092205 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092213 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092223 4895 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092232 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092267 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerDied","Data":"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092285 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092296 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092304 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092312 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092320 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092355 4895 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092363 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092372 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092379 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092387 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092398 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4zc4" event={"ID":"7621f3ab-b09c-4a23-8031-645d96fe5c9b","Type":"ContainerDied","Data":"886fdd8f98afea7698efa10e946e86153a52d6585d4c7ae7db6c0184cacbf33a"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092438 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092450 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639"} Jan 29 
08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092459 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092467 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092474 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092482 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092511 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092522 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092528 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.092535 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a"} Jan 29 
08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.094264 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b4dgj_69ba7dcf-e7a0-4408-983b-09a07851d01c/kube-multus/2.log" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.095165 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b4dgj_69ba7dcf-e7a0-4408-983b-09a07851d01c/kube-multus/1.log" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.095259 4895 generic.go:334] "Generic (PLEG): container finished" podID="69ba7dcf-e7a0-4408-983b-09a07851d01c" containerID="f3b3757319019a832c3ca6eb585f42b40f9081e3da1c5e9129ed33a83bcbd323" exitCode=2 Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.095318 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b4dgj" event={"ID":"69ba7dcf-e7a0-4408-983b-09a07851d01c","Type":"ContainerDied","Data":"f3b3757319019a832c3ca6eb585f42b40f9081e3da1c5e9129ed33a83bcbd323"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.095362 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"115f19a42723357b8ac63e4f01e8d591ea064cf2757203925018f142ddbf1336"} Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.096368 4895 scope.go:117] "RemoveContainer" containerID="f3b3757319019a832c3ca6eb585f42b40f9081e3da1c5e9129ed33a83bcbd323" Jan 29 08:53:05 crc kubenswrapper[4895]: E0129 08:53:05.096888 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-b4dgj_openshift-multus(69ba7dcf-e7a0-4408-983b-09a07851d01c)\"" pod="openshift-multus/multus-b4dgj" podUID="69ba7dcf-e7a0-4408-983b-09a07851d01c" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.114791 4895 scope.go:117] "RemoveContainer" containerID="38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639" Jan 29 
08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.148407 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x4zc4"] Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.149196 4895 scope.go:117] "RemoveContainer" containerID="896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.162599 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x4zc4"] Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.162808 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-var-lib-openvswitch\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.162930 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-systemd-units\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.162968 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/73290130-a373-4110-8942-7a92d143d977-ovn-node-metrics-cert\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.162997 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/73290130-a373-4110-8942-7a92d143d977-ovnkube-config\") pod \"ovnkube-node-mc64l\" (UID: 
\"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.163020 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56dxw\" (UniqueName: \"kubernetes.io/projected/73290130-a373-4110-8942-7a92d143d977-kube-api-access-56dxw\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.163032 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-var-lib-openvswitch\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.163078 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.163105 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/73290130-a373-4110-8942-7a92d143d977-env-overrides\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.163118 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-systemd-units\") pod \"ovnkube-node-mc64l\" (UID: 
\"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.163144 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-etc-openvswitch\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.163167 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-run-openvswitch\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.163262 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-kubelet\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.163291 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-run-ovn\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.163404 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-etc-openvswitch\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 
29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.163506 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-kubelet\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.163565 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.163642 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-run-openvswitch\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.163745 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-run-ovn\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.163901 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/73290130-a373-4110-8942-7a92d143d977-ovnkube-config\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.164041 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-run-ovn-kubernetes\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.163328 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-run-ovn-kubernetes\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165111 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-cni-netd\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165155 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-cni-bin\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165177 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/73290130-a373-4110-8942-7a92d143d977-env-overrides\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165187 4895 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-run-netns\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165237 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-cni-netd\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165251 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-node-log\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165269 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-cni-bin\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165287 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/73290130-a373-4110-8942-7a92d143d977-ovnkube-script-lib\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165303 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-run-netns\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165327 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-slash\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165360 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-log-socket\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165393 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-run-systemd\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165533 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnjb9\" (UniqueName: \"kubernetes.io/projected/7621f3ab-b09c-4a23-8031-645d96fe5c9b-kube-api-access-xnjb9\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165604 4895 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7621f3ab-b09c-4a23-8031-645d96fe5c9b-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165621 4895 reconciler_common.go:293] "Volume detached for 
volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7621f3ab-b09c-4a23-8031-645d96fe5c9b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165665 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-run-systemd\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165704 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-host-slash\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165329 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-node-log\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.165755 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/73290130-a373-4110-8942-7a92d143d977-log-socket\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.166044 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/73290130-a373-4110-8942-7a92d143d977-ovnkube-script-lib\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.167074 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/73290130-a373-4110-8942-7a92d143d977-ovn-node-metrics-cert\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.171117 4895 scope.go:117] "RemoveContainer" containerID="0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.183567 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56dxw\" (UniqueName: \"kubernetes.io/projected/73290130-a373-4110-8942-7a92d143d977-kube-api-access-56dxw\") pod \"ovnkube-node-mc64l\" (UID: \"73290130-a373-4110-8942-7a92d143d977\") " pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.211323 4895 scope.go:117] "RemoveContainer" containerID="d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.221492 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7621f3ab-b09c-4a23-8031-645d96fe5c9b" path="/var/lib/kubelet/pods/7621f3ab-b09c-4a23-8031-645d96fe5c9b/volumes" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.229767 4895 scope.go:117] "RemoveContainer" containerID="fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.245448 4895 scope.go:117] "RemoveContainer" containerID="43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.276616 4895 scope.go:117] "RemoveContainer" containerID="bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034" Jan 29 08:53:05 crc 
kubenswrapper[4895]: I0129 08:53:05.286981 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.292419 4895 scope.go:117] "RemoveContainer" containerID="51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.311225 4895 scope.go:117] "RemoveContainer" containerID="5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a" Jan 29 08:53:05 crc kubenswrapper[4895]: W0129 08:53:05.320280 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73290130_a373_4110_8942_7a92d143d977.slice/crio-45f488276541dc5c63789362263121cce03e5250feb58e94c19a46dc7a62e5f6 WatchSource:0}: Error finding container 45f488276541dc5c63789362263121cce03e5250feb58e94c19a46dc7a62e5f6: Status 404 returned error can't find the container with id 45f488276541dc5c63789362263121cce03e5250feb58e94c19a46dc7a62e5f6 Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.332055 4895 scope.go:117] "RemoveContainer" containerID="fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80" Jan 29 08:53:05 crc kubenswrapper[4895]: E0129 08:53:05.332677 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80\": container with ID starting with fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80 not found: ID does not exist" containerID="fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.332761 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80"} err="failed to get container status 
\"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80\": rpc error: code = NotFound desc = could not find container \"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80\": container with ID starting with fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.332798 4895 scope.go:117] "RemoveContainer" containerID="38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639" Jan 29 08:53:05 crc kubenswrapper[4895]: E0129 08:53:05.333730 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639\": container with ID starting with 38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639 not found: ID does not exist" containerID="38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.333946 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639"} err="failed to get container status \"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639\": rpc error: code = NotFound desc = could not find container \"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639\": container with ID starting with 38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.333995 4895 scope.go:117] "RemoveContainer" containerID="896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659" Jan 29 08:53:05 crc kubenswrapper[4895]: E0129 08:53:05.347196 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\": container with ID starting with 896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659 not found: ID does not exist" containerID="896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.347299 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659"} err="failed to get container status \"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\": rpc error: code = NotFound desc = could not find container \"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\": container with ID starting with 896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.347343 4895 scope.go:117] "RemoveContainer" containerID="0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d" Jan 29 08:53:05 crc kubenswrapper[4895]: E0129 08:53:05.348012 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\": container with ID starting with 0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d not found: ID does not exist" containerID="0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.348091 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d"} err="failed to get container status \"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\": rpc error: code = NotFound desc = could not find container \"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\": container with ID 
starting with 0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.348402 4895 scope.go:117] "RemoveContainer" containerID="d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef" Jan 29 08:53:05 crc kubenswrapper[4895]: E0129 08:53:05.351759 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\": container with ID starting with d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef not found: ID does not exist" containerID="d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.351969 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef"} err="failed to get container status \"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\": rpc error: code = NotFound desc = could not find container \"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\": container with ID starting with d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.352066 4895 scope.go:117] "RemoveContainer" containerID="fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a" Jan 29 08:53:05 crc kubenswrapper[4895]: E0129 08:53:05.354147 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\": container with ID starting with fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a not found: ID does not exist" containerID="fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a" Jan 29 
08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.354292 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a"} err="failed to get container status \"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\": rpc error: code = NotFound desc = could not find container \"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\": container with ID starting with fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.354434 4895 scope.go:117] "RemoveContainer" containerID="43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785" Jan 29 08:53:05 crc kubenswrapper[4895]: E0129 08:53:05.355183 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\": container with ID starting with 43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785 not found: ID does not exist" containerID="43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.355265 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785"} err="failed to get container status \"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\": rpc error: code = NotFound desc = could not find container \"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\": container with ID starting with 43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.355302 4895 scope.go:117] "RemoveContainer" 
containerID="bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034" Jan 29 08:53:05 crc kubenswrapper[4895]: E0129 08:53:05.355983 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\": container with ID starting with bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034 not found: ID does not exist" containerID="bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.356319 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034"} err="failed to get container status \"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\": rpc error: code = NotFound desc = could not find container \"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\": container with ID starting with bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.356362 4895 scope.go:117] "RemoveContainer" containerID="51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1" Jan 29 08:53:05 crc kubenswrapper[4895]: E0129 08:53:05.357612 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\": container with ID starting with 51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1 not found: ID does not exist" containerID="51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.357727 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1"} err="failed to get container status \"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\": rpc error: code = NotFound desc = could not find container \"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\": container with ID starting with 51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.357786 4895 scope.go:117] "RemoveContainer" containerID="5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a" Jan 29 08:53:05 crc kubenswrapper[4895]: E0129 08:53:05.358548 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\": container with ID starting with 5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a not found: ID does not exist" containerID="5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.358595 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a"} err="failed to get container status \"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\": rpc error: code = NotFound desc = could not find container \"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\": container with ID starting with 5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.358623 4895 scope.go:117] "RemoveContainer" containerID="fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.359525 4895 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80"} err="failed to get container status \"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80\": rpc error: code = NotFound desc = could not find container \"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80\": container with ID starting with fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.359572 4895 scope.go:117] "RemoveContainer" containerID="38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.359998 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639"} err="failed to get container status \"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639\": rpc error: code = NotFound desc = could not find container \"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639\": container with ID starting with 38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.360017 4895 scope.go:117] "RemoveContainer" containerID="896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.361228 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659"} err="failed to get container status \"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\": rpc error: code = NotFound desc = could not find container \"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\": container with ID starting with 896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659 not 
found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.361255 4895 scope.go:117] "RemoveContainer" containerID="0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.361952 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d"} err="failed to get container status \"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\": rpc error: code = NotFound desc = could not find container \"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\": container with ID starting with 0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.361997 4895 scope.go:117] "RemoveContainer" containerID="d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.362298 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef"} err="failed to get container status \"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\": rpc error: code = NotFound desc = could not find container \"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\": container with ID starting with d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.362324 4895 scope.go:117] "RemoveContainer" containerID="fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.362614 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a"} err="failed to get 
container status \"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\": rpc error: code = NotFound desc = could not find container \"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\": container with ID starting with fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.362639 4895 scope.go:117] "RemoveContainer" containerID="43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.362938 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785"} err="failed to get container status \"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\": rpc error: code = NotFound desc = could not find container \"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\": container with ID starting with 43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.362968 4895 scope.go:117] "RemoveContainer" containerID="bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.363198 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034"} err="failed to get container status \"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\": rpc error: code = NotFound desc = could not find container \"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\": container with ID starting with bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.363223 4895 scope.go:117] "RemoveContainer" 
containerID="51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.363493 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1"} err="failed to get container status \"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\": rpc error: code = NotFound desc = could not find container \"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\": container with ID starting with 51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.363519 4895 scope.go:117] "RemoveContainer" containerID="5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.363758 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a"} err="failed to get container status \"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\": rpc error: code = NotFound desc = could not find container \"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\": container with ID starting with 5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.363783 4895 scope.go:117] "RemoveContainer" containerID="fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.364036 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80"} err="failed to get container status \"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80\": rpc error: code = NotFound desc = could 
not find container \"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80\": container with ID starting with fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.364057 4895 scope.go:117] "RemoveContainer" containerID="38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.364304 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639"} err="failed to get container status \"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639\": rpc error: code = NotFound desc = could not find container \"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639\": container with ID starting with 38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.364337 4895 scope.go:117] "RemoveContainer" containerID="896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.364552 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659"} err="failed to get container status \"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\": rpc error: code = NotFound desc = could not find container \"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\": container with ID starting with 896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.364576 4895 scope.go:117] "RemoveContainer" containerID="0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 
08:53:05.364789 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d"} err="failed to get container status \"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\": rpc error: code = NotFound desc = could not find container \"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\": container with ID starting with 0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.364813 4895 scope.go:117] "RemoveContainer" containerID="d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.365056 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef"} err="failed to get container status \"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\": rpc error: code = NotFound desc = could not find container \"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\": container with ID starting with d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.365081 4895 scope.go:117] "RemoveContainer" containerID="fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.365540 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a"} err="failed to get container status \"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\": rpc error: code = NotFound desc = could not find container \"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\": container with ID starting with 
fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.365606 4895 scope.go:117] "RemoveContainer" containerID="43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.366126 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785"} err="failed to get container status \"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\": rpc error: code = NotFound desc = could not find container \"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\": container with ID starting with 43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.366185 4895 scope.go:117] "RemoveContainer" containerID="bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.366596 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034"} err="failed to get container status \"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\": rpc error: code = NotFound desc = could not find container \"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\": container with ID starting with bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.366622 4895 scope.go:117] "RemoveContainer" containerID="51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.366907 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1"} err="failed to get container status \"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\": rpc error: code = NotFound desc = could not find container \"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\": container with ID starting with 51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.366946 4895 scope.go:117] "RemoveContainer" containerID="5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.367216 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a"} err="failed to get container status \"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\": rpc error: code = NotFound desc = could not find container \"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\": container with ID starting with 5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.367263 4895 scope.go:117] "RemoveContainer" containerID="fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.367520 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80"} err="failed to get container status \"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80\": rpc error: code = NotFound desc = could not find container \"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80\": container with ID starting with fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80 not found: ID does not 
exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.367545 4895 scope.go:117] "RemoveContainer" containerID="38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.367783 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639"} err="failed to get container status \"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639\": rpc error: code = NotFound desc = could not find container \"38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639\": container with ID starting with 38f89287402c406a93ec24a4a199bc6b7a9b2a82f8c2ede621899b2ffc261639 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.367806 4895 scope.go:117] "RemoveContainer" containerID="896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.368083 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659"} err="failed to get container status \"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\": rpc error: code = NotFound desc = could not find container \"896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659\": container with ID starting with 896da99f627eb61a5a113525dadc69770414dc8899916db6d6fb6ab16f2d6659 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.368110 4895 scope.go:117] "RemoveContainer" containerID="0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.368449 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d"} err="failed to get container status 
\"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\": rpc error: code = NotFound desc = could not find container \"0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d\": container with ID starting with 0ab4660e29e464a3fda9097173c771417d50e969adde4d70aed68704e7bc4b5d not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.368474 4895 scope.go:117] "RemoveContainer" containerID="d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.368792 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef"} err="failed to get container status \"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\": rpc error: code = NotFound desc = could not find container \"d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef\": container with ID starting with d598776520408cea2800a46e4877f432365453944545dce38c496cd2272009ef not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.368842 4895 scope.go:117] "RemoveContainer" containerID="fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.369264 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a"} err="failed to get container status \"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\": rpc error: code = NotFound desc = could not find container \"fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a\": container with ID starting with fdb2ddca6a2c0056d955c2fceb0e79d7c0adcecb519b63ae8b8e067ee10e185a not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.369288 4895 scope.go:117] "RemoveContainer" 
containerID="43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.370021 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785"} err="failed to get container status \"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\": rpc error: code = NotFound desc = could not find container \"43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785\": container with ID starting with 43c36e22f71080ee5a27c272d7ff0dfe7325b01c45af268a1e5cee114802f785 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.370045 4895 scope.go:117] "RemoveContainer" containerID="bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.370469 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034"} err="failed to get container status \"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\": rpc error: code = NotFound desc = could not find container \"bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034\": container with ID starting with bc981d759c0cbe537cd74b9e3aff9db11446068c054cc55ac74fddacfd697034 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.370494 4895 scope.go:117] "RemoveContainer" containerID="51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.370825 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1"} err="failed to get container status \"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\": rpc error: code = NotFound desc = could 
not find container \"51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1\": container with ID starting with 51882d8723debf5ca50acc22db7bbda5c95e66c82cea2678263c7bc24ef7faa1 not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.370851 4895 scope.go:117] "RemoveContainer" containerID="5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.371172 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a"} err="failed to get container status \"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\": rpc error: code = NotFound desc = could not find container \"5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a\": container with ID starting with 5ce2bd031d3e77123cf96f1cda535bcc24bd60bded0e81aeadd670365798dd8a not found: ID does not exist" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.371199 4895 scope.go:117] "RemoveContainer" containerID="fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80" Jan 29 08:53:05 crc kubenswrapper[4895]: I0129 08:53:05.371478 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80"} err="failed to get container status \"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80\": rpc error: code = NotFound desc = could not find container \"fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80\": container with ID starting with fd80f00604561becdf0de336875c154269e1ac6356be1bdf6f21fdaaf7eeca80 not found: ID does not exist" Jan 29 08:53:06 crc kubenswrapper[4895]: I0129 08:53:06.106986 4895 generic.go:334] "Generic (PLEG): container finished" podID="73290130-a373-4110-8942-7a92d143d977" 
containerID="3bbb205e557e9ae34c8788e484d5b6d6e47e30bd793a4d967385e252b8b03625" exitCode=0 Jan 29 08:53:06 crc kubenswrapper[4895]: I0129 08:53:06.107032 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" event={"ID":"73290130-a373-4110-8942-7a92d143d977","Type":"ContainerDied","Data":"3bbb205e557e9ae34c8788e484d5b6d6e47e30bd793a4d967385e252b8b03625"} Jan 29 08:53:06 crc kubenswrapper[4895]: I0129 08:53:06.107060 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" event={"ID":"73290130-a373-4110-8942-7a92d143d977","Type":"ContainerStarted","Data":"45f488276541dc5c63789362263121cce03e5250feb58e94c19a46dc7a62e5f6"} Jan 29 08:53:07 crc kubenswrapper[4895]: I0129 08:53:07.115821 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" event={"ID":"73290130-a373-4110-8942-7a92d143d977","Type":"ContainerStarted","Data":"bf1e9316cb2d679257dbddf2a003aede877f927f964f35e0001e2e0ceb083176"} Jan 29 08:53:07 crc kubenswrapper[4895]: I0129 08:53:07.116238 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" event={"ID":"73290130-a373-4110-8942-7a92d143d977","Type":"ContainerStarted","Data":"c92f73ede623b9d74d0767305a2af7332a34bdecb910c681095c818b70151e68"} Jan 29 08:53:07 crc kubenswrapper[4895]: I0129 08:53:07.116251 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" event={"ID":"73290130-a373-4110-8942-7a92d143d977","Type":"ContainerStarted","Data":"7c3f54d1cc7bd0c715a395a05a94ad6d1aab0ae52d6a3c211ee7d163436f0c2b"} Jan 29 08:53:07 crc kubenswrapper[4895]: I0129 08:53:07.116261 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" 
event={"ID":"73290130-a373-4110-8942-7a92d143d977","Type":"ContainerStarted","Data":"5d757e1580da5e0a18b72fee39b70d1ae534b2cbbcd3bfcfa45b6cf7acd7538f"} Jan 29 08:53:08 crc kubenswrapper[4895]: I0129 08:53:08.126199 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" event={"ID":"73290130-a373-4110-8942-7a92d143d977","Type":"ContainerStarted","Data":"e747a4ad9c3351f4f1ed0e3b61d7278ceb42d8fb730a1ce6a24bcb302a713dbe"} Jan 29 08:53:08 crc kubenswrapper[4895]: I0129 08:53:08.126252 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" event={"ID":"73290130-a373-4110-8942-7a92d143d977","Type":"ContainerStarted","Data":"fd2c49b0b6dd9907117cc7a0973b5ebe1996f468f2f3cee49cdfb6086634cdc5"} Jan 29 08:53:10 crc kubenswrapper[4895]: I0129 08:53:10.160222 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" event={"ID":"73290130-a373-4110-8942-7a92d143d977","Type":"ContainerStarted","Data":"ad10e3876daa82a2854f72b44c06e75c3017835b866cce4781db3a39bc81c4a6"} Jan 29 08:53:10 crc kubenswrapper[4895]: I0129 08:53:10.540880 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-ttlgx" Jan 29 08:53:13 crc kubenswrapper[4895]: I0129 08:53:13.187258 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" event={"ID":"73290130-a373-4110-8942-7a92d143d977","Type":"ContainerStarted","Data":"4a74244f913afd894e20e34d7e9ddb3b7d66edda0a8bd48318168b894e8e2960"} Jan 29 08:53:13 crc kubenswrapper[4895]: I0129 08:53:13.188062 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:13 crc kubenswrapper[4895]: I0129 08:53:13.188079 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 
08:53:13 crc kubenswrapper[4895]: I0129 08:53:13.225569 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" podStartSLOduration=9.22554195 podStartE2EDuration="9.22554195s" podCreationTimestamp="2026-01-29 08:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:53:13.223481474 +0000 UTC m=+734.864989630" watchObservedRunningTime="2026-01-29 08:53:13.22554195 +0000 UTC m=+734.867050096" Jan 29 08:53:13 crc kubenswrapper[4895]: I0129 08:53:13.233993 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:14 crc kubenswrapper[4895]: I0129 08:53:14.193377 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:14 crc kubenswrapper[4895]: I0129 08:53:14.275674 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:16 crc kubenswrapper[4895]: I0129 08:53:16.021566 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:53:16 crc kubenswrapper[4895]: I0129 08:53:16.022057 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:53:16 crc kubenswrapper[4895]: I0129 08:53:16.211802 4895 scope.go:117] "RemoveContainer" 
containerID="f3b3757319019a832c3ca6eb585f42b40f9081e3da1c5e9129ed33a83bcbd323" Jan 29 08:53:17 crc kubenswrapper[4895]: I0129 08:53:17.213769 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b4dgj_69ba7dcf-e7a0-4408-983b-09a07851d01c/kube-multus/2.log" Jan 29 08:53:17 crc kubenswrapper[4895]: I0129 08:53:17.214847 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b4dgj_69ba7dcf-e7a0-4408-983b-09a07851d01c/kube-multus/1.log" Jan 29 08:53:17 crc kubenswrapper[4895]: I0129 08:53:17.219949 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b4dgj" event={"ID":"69ba7dcf-e7a0-4408-983b-09a07851d01c","Type":"ContainerStarted","Data":"b9500285c336d06ce2d1fafd3b027cf75a7a5ad8cbba63a0f0ab3817d90311b7"} Jan 29 08:53:35 crc kubenswrapper[4895]: I0129 08:53:35.317834 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mc64l" Jan 29 08:53:46 crc kubenswrapper[4895]: I0129 08:53:46.020579 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:53:46 crc kubenswrapper[4895]: I0129 08:53:46.021515 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:53:50 crc kubenswrapper[4895]: I0129 08:53:50.138410 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt"] Jan 29 08:53:50 crc kubenswrapper[4895]: I0129 
08:53:50.140160 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" Jan 29 08:53:50 crc kubenswrapper[4895]: I0129 08:53:50.143068 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 08:53:50 crc kubenswrapper[4895]: I0129 08:53:50.153405 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt"] Jan 29 08:53:50 crc kubenswrapper[4895]: I0129 08:53:50.261037 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31f401a0-5ab9-427e-a086-8099fe28462f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt\" (UID: \"31f401a0-5ab9-427e-a086-8099fe28462f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" Jan 29 08:53:50 crc kubenswrapper[4895]: I0129 08:53:50.261113 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9tfx\" (UniqueName: \"kubernetes.io/projected/31f401a0-5ab9-427e-a086-8099fe28462f-kube-api-access-w9tfx\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt\" (UID: \"31f401a0-5ab9-427e-a086-8099fe28462f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" Jan 29 08:53:50 crc kubenswrapper[4895]: I0129 08:53:50.261377 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31f401a0-5ab9-427e-a086-8099fe28462f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt\" (UID: \"31f401a0-5ab9-427e-a086-8099fe28462f\") " 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" Jan 29 08:53:50 crc kubenswrapper[4895]: I0129 08:53:50.363285 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31f401a0-5ab9-427e-a086-8099fe28462f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt\" (UID: \"31f401a0-5ab9-427e-a086-8099fe28462f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" Jan 29 08:53:50 crc kubenswrapper[4895]: I0129 08:53:50.363350 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31f401a0-5ab9-427e-a086-8099fe28462f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt\" (UID: \"31f401a0-5ab9-427e-a086-8099fe28462f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" Jan 29 08:53:50 crc kubenswrapper[4895]: I0129 08:53:50.363384 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9tfx\" (UniqueName: \"kubernetes.io/projected/31f401a0-5ab9-427e-a086-8099fe28462f-kube-api-access-w9tfx\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt\" (UID: \"31f401a0-5ab9-427e-a086-8099fe28462f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" Jan 29 08:53:50 crc kubenswrapper[4895]: I0129 08:53:50.363957 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31f401a0-5ab9-427e-a086-8099fe28462f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt\" (UID: \"31f401a0-5ab9-427e-a086-8099fe28462f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" Jan 29 08:53:50 crc kubenswrapper[4895]: I0129 08:53:50.364255 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31f401a0-5ab9-427e-a086-8099fe28462f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt\" (UID: \"31f401a0-5ab9-427e-a086-8099fe28462f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" Jan 29 08:53:50 crc kubenswrapper[4895]: I0129 08:53:50.390098 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9tfx\" (UniqueName: \"kubernetes.io/projected/31f401a0-5ab9-427e-a086-8099fe28462f-kube-api-access-w9tfx\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt\" (UID: \"31f401a0-5ab9-427e-a086-8099fe28462f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" Jan 29 08:53:50 crc kubenswrapper[4895]: I0129 08:53:50.499848 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" Jan 29 08:53:50 crc kubenswrapper[4895]: I0129 08:53:50.577121 4895 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 08:53:50 crc kubenswrapper[4895]: I0129 08:53:50.783991 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt"] Jan 29 08:53:51 crc kubenswrapper[4895]: I0129 08:53:51.413203 4895 generic.go:334] "Generic (PLEG): container finished" podID="31f401a0-5ab9-427e-a086-8099fe28462f" containerID="5d30ef3bafced9a6815bf9bbe0c187f3dc0e5e0975472f14da16d214084c33e5" exitCode=0 Jan 29 08:53:51 crc kubenswrapper[4895]: I0129 08:53:51.413440 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" 
event={"ID":"31f401a0-5ab9-427e-a086-8099fe28462f","Type":"ContainerDied","Data":"5d30ef3bafced9a6815bf9bbe0c187f3dc0e5e0975472f14da16d214084c33e5"} Jan 29 08:53:51 crc kubenswrapper[4895]: I0129 08:53:51.414847 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" event={"ID":"31f401a0-5ab9-427e-a086-8099fe28462f","Type":"ContainerStarted","Data":"f1bc5bd591076a502243f452b3712a2c116256ece1ee17191925e466828f3f1a"} Jan 29 08:53:51 crc kubenswrapper[4895]: I0129 08:53:51.895494 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xsw89"] Jan 29 08:53:51 crc kubenswrapper[4895]: I0129 08:53:51.896946 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:53:51 crc kubenswrapper[4895]: I0129 08:53:51.909559 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xsw89"] Jan 29 08:53:51 crc kubenswrapper[4895]: I0129 08:53:51.996217 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq4r2\" (UniqueName: \"kubernetes.io/projected/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-kube-api-access-xq4r2\") pod \"redhat-operators-xsw89\" (UID: \"86ff3ca9-bfe3-43a1-ba39-63635ca905e3\") " pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:53:51 crc kubenswrapper[4895]: I0129 08:53:51.996634 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-catalog-content\") pod \"redhat-operators-xsw89\" (UID: \"86ff3ca9-bfe3-43a1-ba39-63635ca905e3\") " pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:53:51 crc kubenswrapper[4895]: I0129 08:53:51.996713 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-utilities\") pod \"redhat-operators-xsw89\" (UID: \"86ff3ca9-bfe3-43a1-ba39-63635ca905e3\") " pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:53:52 crc kubenswrapper[4895]: I0129 08:53:52.097763 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-utilities\") pod \"redhat-operators-xsw89\" (UID: \"86ff3ca9-bfe3-43a1-ba39-63635ca905e3\") " pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:53:52 crc kubenswrapper[4895]: I0129 08:53:52.097841 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq4r2\" (UniqueName: \"kubernetes.io/projected/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-kube-api-access-xq4r2\") pod \"redhat-operators-xsw89\" (UID: \"86ff3ca9-bfe3-43a1-ba39-63635ca905e3\") " pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:53:52 crc kubenswrapper[4895]: I0129 08:53:52.097875 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-catalog-content\") pod \"redhat-operators-xsw89\" (UID: \"86ff3ca9-bfe3-43a1-ba39-63635ca905e3\") " pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:53:52 crc kubenswrapper[4895]: I0129 08:53:52.098352 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-utilities\") pod \"redhat-operators-xsw89\" (UID: \"86ff3ca9-bfe3-43a1-ba39-63635ca905e3\") " pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:53:52 crc kubenswrapper[4895]: I0129 08:53:52.098561 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-catalog-content\") pod \"redhat-operators-xsw89\" (UID: \"86ff3ca9-bfe3-43a1-ba39-63635ca905e3\") " pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:53:52 crc kubenswrapper[4895]: I0129 08:53:52.135659 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq4r2\" (UniqueName: \"kubernetes.io/projected/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-kube-api-access-xq4r2\") pod \"redhat-operators-xsw89\" (UID: \"86ff3ca9-bfe3-43a1-ba39-63635ca905e3\") " pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:53:52 crc kubenswrapper[4895]: I0129 08:53:52.215447 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:53:52 crc kubenswrapper[4895]: I0129 08:53:52.500261 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xsw89"] Jan 29 08:53:52 crc kubenswrapper[4895]: W0129 08:53:52.511323 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86ff3ca9_bfe3_43a1_ba39_63635ca905e3.slice/crio-5d070f119107108e1d80ed270d7eea0a90a41508337f67aeefa81aa7df5fd2de WatchSource:0}: Error finding container 5d070f119107108e1d80ed270d7eea0a90a41508337f67aeefa81aa7df5fd2de: Status 404 returned error can't find the container with id 5d070f119107108e1d80ed270d7eea0a90a41508337f67aeefa81aa7df5fd2de Jan 29 08:53:53 crc kubenswrapper[4895]: I0129 08:53:53.434462 4895 generic.go:334] "Generic (PLEG): container finished" podID="86ff3ca9-bfe3-43a1-ba39-63635ca905e3" containerID="cb96db730431521a993b2c6a0d974d2829d8d681d8da74a4164e0fb84070ed75" exitCode=0 Jan 29 08:53:53 crc kubenswrapper[4895]: I0129 08:53:53.434605 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsw89" 
event={"ID":"86ff3ca9-bfe3-43a1-ba39-63635ca905e3","Type":"ContainerDied","Data":"cb96db730431521a993b2c6a0d974d2829d8d681d8da74a4164e0fb84070ed75"} Jan 29 08:53:53 crc kubenswrapper[4895]: I0129 08:53:53.435074 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsw89" event={"ID":"86ff3ca9-bfe3-43a1-ba39-63635ca905e3","Type":"ContainerStarted","Data":"5d070f119107108e1d80ed270d7eea0a90a41508337f67aeefa81aa7df5fd2de"} Jan 29 08:53:53 crc kubenswrapper[4895]: I0129 08:53:53.437280 4895 generic.go:334] "Generic (PLEG): container finished" podID="31f401a0-5ab9-427e-a086-8099fe28462f" containerID="4394aab11f88f272fe34fd434b07f8415252ccaff79d903d3cbaa0c965aa1423" exitCode=0 Jan 29 08:53:53 crc kubenswrapper[4895]: I0129 08:53:53.437333 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" event={"ID":"31f401a0-5ab9-427e-a086-8099fe28462f","Type":"ContainerDied","Data":"4394aab11f88f272fe34fd434b07f8415252ccaff79d903d3cbaa0c965aa1423"} Jan 29 08:53:54 crc kubenswrapper[4895]: I0129 08:53:54.445416 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsw89" event={"ID":"86ff3ca9-bfe3-43a1-ba39-63635ca905e3","Type":"ContainerStarted","Data":"50c2fd765c396456aef5fbe4880901e223c615427cfb6acbaa35ce94b72970f3"} Jan 29 08:53:54 crc kubenswrapper[4895]: I0129 08:53:54.448142 4895 generic.go:334] "Generic (PLEG): container finished" podID="31f401a0-5ab9-427e-a086-8099fe28462f" containerID="136c615d07a1814e13f0ea679f105b21a77a8ad7379221511c3cd876aad12366" exitCode=0 Jan 29 08:53:54 crc kubenswrapper[4895]: I0129 08:53:54.448204 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" 
event={"ID":"31f401a0-5ab9-427e-a086-8099fe28462f","Type":"ContainerDied","Data":"136c615d07a1814e13f0ea679f105b21a77a8ad7379221511c3cd876aad12366"} Jan 29 08:53:55 crc kubenswrapper[4895]: I0129 08:53:55.762551 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" Jan 29 08:53:55 crc kubenswrapper[4895]: I0129 08:53:55.848814 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9tfx\" (UniqueName: \"kubernetes.io/projected/31f401a0-5ab9-427e-a086-8099fe28462f-kube-api-access-w9tfx\") pod \"31f401a0-5ab9-427e-a086-8099fe28462f\" (UID: \"31f401a0-5ab9-427e-a086-8099fe28462f\") " Jan 29 08:53:55 crc kubenswrapper[4895]: I0129 08:53:55.848883 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31f401a0-5ab9-427e-a086-8099fe28462f-bundle\") pod \"31f401a0-5ab9-427e-a086-8099fe28462f\" (UID: \"31f401a0-5ab9-427e-a086-8099fe28462f\") " Jan 29 08:53:55 crc kubenswrapper[4895]: I0129 08:53:55.849052 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31f401a0-5ab9-427e-a086-8099fe28462f-util\") pod \"31f401a0-5ab9-427e-a086-8099fe28462f\" (UID: \"31f401a0-5ab9-427e-a086-8099fe28462f\") " Jan 29 08:53:55 crc kubenswrapper[4895]: I0129 08:53:55.849647 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31f401a0-5ab9-427e-a086-8099fe28462f-bundle" (OuterVolumeSpecName: "bundle") pod "31f401a0-5ab9-427e-a086-8099fe28462f" (UID: "31f401a0-5ab9-427e-a086-8099fe28462f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:53:55 crc kubenswrapper[4895]: I0129 08:53:55.856106 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f401a0-5ab9-427e-a086-8099fe28462f-kube-api-access-w9tfx" (OuterVolumeSpecName: "kube-api-access-w9tfx") pod "31f401a0-5ab9-427e-a086-8099fe28462f" (UID: "31f401a0-5ab9-427e-a086-8099fe28462f"). InnerVolumeSpecName "kube-api-access-w9tfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:53:55 crc kubenswrapper[4895]: I0129 08:53:55.863318 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31f401a0-5ab9-427e-a086-8099fe28462f-util" (OuterVolumeSpecName: "util") pod "31f401a0-5ab9-427e-a086-8099fe28462f" (UID: "31f401a0-5ab9-427e-a086-8099fe28462f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:53:55 crc kubenswrapper[4895]: I0129 08:53:55.950529 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9tfx\" (UniqueName: \"kubernetes.io/projected/31f401a0-5ab9-427e-a086-8099fe28462f-kube-api-access-w9tfx\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:55 crc kubenswrapper[4895]: I0129 08:53:55.950586 4895 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31f401a0-5ab9-427e-a086-8099fe28462f-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:55 crc kubenswrapper[4895]: I0129 08:53:55.950597 4895 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31f401a0-5ab9-427e-a086-8099fe28462f-util\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:56 crc kubenswrapper[4895]: I0129 08:53:56.465053 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" Jan 29 08:53:56 crc kubenswrapper[4895]: I0129 08:53:56.465053 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt" event={"ID":"31f401a0-5ab9-427e-a086-8099fe28462f","Type":"ContainerDied","Data":"f1bc5bd591076a502243f452b3712a2c116256ece1ee17191925e466828f3f1a"} Jan 29 08:53:56 crc kubenswrapper[4895]: I0129 08:53:56.465222 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1bc5bd591076a502243f452b3712a2c116256ece1ee17191925e466828f3f1a" Jan 29 08:53:56 crc kubenswrapper[4895]: I0129 08:53:56.467653 4895 generic.go:334] "Generic (PLEG): container finished" podID="86ff3ca9-bfe3-43a1-ba39-63635ca905e3" containerID="50c2fd765c396456aef5fbe4880901e223c615427cfb6acbaa35ce94b72970f3" exitCode=0 Jan 29 08:53:56 crc kubenswrapper[4895]: I0129 08:53:56.467695 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsw89" event={"ID":"86ff3ca9-bfe3-43a1-ba39-63635ca905e3","Type":"ContainerDied","Data":"50c2fd765c396456aef5fbe4880901e223c615427cfb6acbaa35ce94b72970f3"} Jan 29 08:53:57 crc kubenswrapper[4895]: I0129 08:53:57.477651 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsw89" event={"ID":"86ff3ca9-bfe3-43a1-ba39-63635ca905e3","Type":"ContainerStarted","Data":"dcd658ac8d2f4c59295139d3ebdf0a4d735bd483c4f89c7c0f9afc2d9ee27d65"} Jan 29 08:53:57 crc kubenswrapper[4895]: I0129 08:53:57.497942 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xsw89" podStartSLOduration=3.046894375 podStartE2EDuration="6.497898626s" podCreationTimestamp="2026-01-29 08:53:51 +0000 UTC" firstStartedPulling="2026-01-29 08:53:53.437204199 +0000 UTC m=+775.078712345" 
lastFinishedPulling="2026-01-29 08:53:56.88820845 +0000 UTC m=+778.529716596" observedRunningTime="2026-01-29 08:53:57.494454375 +0000 UTC m=+779.135962521" watchObservedRunningTime="2026-01-29 08:53:57.497898626 +0000 UTC m=+779.139406772" Jan 29 08:53:59 crc kubenswrapper[4895]: I0129 08:53:59.644755 4895 scope.go:117] "RemoveContainer" containerID="115f19a42723357b8ac63e4f01e8d591ea064cf2757203925018f142ddbf1336" Jan 29 08:54:00 crc kubenswrapper[4895]: I0129 08:54:00.499167 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b4dgj_69ba7dcf-e7a0-4408-983b-09a07851d01c/kube-multus/2.log" Jan 29 08:54:00 crc kubenswrapper[4895]: I0129 08:54:00.601815 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-nbgdz"] Jan 29 08:54:00 crc kubenswrapper[4895]: E0129 08:54:00.602069 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f401a0-5ab9-427e-a086-8099fe28462f" containerName="extract" Jan 29 08:54:00 crc kubenswrapper[4895]: I0129 08:54:00.602086 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f401a0-5ab9-427e-a086-8099fe28462f" containerName="extract" Jan 29 08:54:00 crc kubenswrapper[4895]: E0129 08:54:00.602100 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f401a0-5ab9-427e-a086-8099fe28462f" containerName="pull" Jan 29 08:54:00 crc kubenswrapper[4895]: I0129 08:54:00.602106 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f401a0-5ab9-427e-a086-8099fe28462f" containerName="pull" Jan 29 08:54:00 crc kubenswrapper[4895]: E0129 08:54:00.602123 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f401a0-5ab9-427e-a086-8099fe28462f" containerName="util" Jan 29 08:54:00 crc kubenswrapper[4895]: I0129 08:54:00.602130 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f401a0-5ab9-427e-a086-8099fe28462f" containerName="util" Jan 29 08:54:00 crc kubenswrapper[4895]: I0129 
08:54:00.602239 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="31f401a0-5ab9-427e-a086-8099fe28462f" containerName="extract" Jan 29 08:54:00 crc kubenswrapper[4895]: I0129 08:54:00.602661 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-nbgdz" Jan 29 08:54:00 crc kubenswrapper[4895]: I0129 08:54:00.604485 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 08:54:00 crc kubenswrapper[4895]: I0129 08:54:00.605143 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 08:54:00 crc kubenswrapper[4895]: I0129 08:54:00.605313 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-2s457" Jan 29 08:54:00 crc kubenswrapper[4895]: I0129 08:54:00.626075 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-nbgdz"] Jan 29 08:54:00 crc kubenswrapper[4895]: I0129 08:54:00.716107 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6fb4\" (UniqueName: \"kubernetes.io/projected/686e1923-3a25-460b-b2f1-636cd6039ffe-kube-api-access-n6fb4\") pod \"nmstate-operator-646758c888-nbgdz\" (UID: \"686e1923-3a25-460b-b2f1-636cd6039ffe\") " pod="openshift-nmstate/nmstate-operator-646758c888-nbgdz" Jan 29 08:54:00 crc kubenswrapper[4895]: I0129 08:54:00.817419 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6fb4\" (UniqueName: \"kubernetes.io/projected/686e1923-3a25-460b-b2f1-636cd6039ffe-kube-api-access-n6fb4\") pod \"nmstate-operator-646758c888-nbgdz\" (UID: \"686e1923-3a25-460b-b2f1-636cd6039ffe\") " pod="openshift-nmstate/nmstate-operator-646758c888-nbgdz" Jan 29 08:54:00 crc kubenswrapper[4895]: I0129 08:54:00.839900 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6fb4\" (UniqueName: \"kubernetes.io/projected/686e1923-3a25-460b-b2f1-636cd6039ffe-kube-api-access-n6fb4\") pod \"nmstate-operator-646758c888-nbgdz\" (UID: \"686e1923-3a25-460b-b2f1-636cd6039ffe\") " pod="openshift-nmstate/nmstate-operator-646758c888-nbgdz" Jan 29 08:54:00 crc kubenswrapper[4895]: I0129 08:54:00.916997 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-nbgdz" Jan 29 08:54:01 crc kubenswrapper[4895]: I0129 08:54:01.165093 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-nbgdz"] Jan 29 08:54:01 crc kubenswrapper[4895]: I0129 08:54:01.507529 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-nbgdz" event={"ID":"686e1923-3a25-460b-b2f1-636cd6039ffe","Type":"ContainerStarted","Data":"fcf783a69de066f83da4febe63591188184fca4601ac6cea926c00fd61218be6"} Jan 29 08:54:02 crc kubenswrapper[4895]: I0129 08:54:02.215747 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:54:02 crc kubenswrapper[4895]: I0129 08:54:02.216586 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:54:03 crc kubenswrapper[4895]: I0129 08:54:03.263033 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xsw89" podUID="86ff3ca9-bfe3-43a1-ba39-63635ca905e3" containerName="registry-server" probeResult="failure" output=< Jan 29 08:54:03 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Jan 29 08:54:03 crc kubenswrapper[4895]: > Jan 29 08:54:03 crc kubenswrapper[4895]: I0129 08:54:03.536464 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-operator-646758c888-nbgdz" event={"ID":"686e1923-3a25-460b-b2f1-636cd6039ffe","Type":"ContainerStarted","Data":"bf6b88cdeab25061e81151b083dd8a81ade849da934ca4c4534e37da0034aade"} Jan 29 08:54:03 crc kubenswrapper[4895]: I0129 08:54:03.555583 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-nbgdz" podStartSLOduration=1.392988748 podStartE2EDuration="3.55556507s" podCreationTimestamp="2026-01-29 08:54:00 +0000 UTC" firstStartedPulling="2026-01-29 08:54:01.177937094 +0000 UTC m=+782.819445240" lastFinishedPulling="2026-01-29 08:54:03.340513416 +0000 UTC m=+784.982021562" observedRunningTime="2026-01-29 08:54:03.554035159 +0000 UTC m=+785.195543315" watchObservedRunningTime="2026-01-29 08:54:03.55556507 +0000 UTC m=+785.197073216" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.238831 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-qg2h4"] Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.240590 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-qg2h4" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.244298 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-jrtlm" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.249485 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl"] Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.250473 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.252984 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.261879 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9rsm\" (UniqueName: \"kubernetes.io/projected/2ce6529a-8832-46df-b211-7d7f2388214b-kube-api-access-j9rsm\") pod \"nmstate-metrics-54757c584b-qg2h4\" (UID: \"2ce6529a-8832-46df-b211-7d7f2388214b\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-qg2h4" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.263748 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-qg2h4"] Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.274347 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl"] Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.290007 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-62g2t"] Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.291087 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-62g2t" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.364370 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9rsm\" (UniqueName: \"kubernetes.io/projected/2ce6529a-8832-46df-b211-7d7f2388214b-kube-api-access-j9rsm\") pod \"nmstate-metrics-54757c584b-qg2h4\" (UID: \"2ce6529a-8832-46df-b211-7d7f2388214b\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-qg2h4" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.364441 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/df364a5d-82b0-43f6-9e56-fb2fd0fef1e2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-mgwfl\" (UID: \"df364a5d-82b0-43f6-9e56-fb2fd0fef1e2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.364493 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6llb\" (UniqueName: \"kubernetes.io/projected/df364a5d-82b0-43f6-9e56-fb2fd0fef1e2-kube-api-access-g6llb\") pod \"nmstate-webhook-8474b5b9d8-mgwfl\" (UID: \"df364a5d-82b0-43f6-9e56-fb2fd0fef1e2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.364543 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkgjb\" (UniqueName: \"kubernetes.io/projected/2a149626-5a36-418c-b7a2-87ff50e92c34-kube-api-access-zkgjb\") pod \"nmstate-handler-62g2t\" (UID: \"2a149626-5a36-418c-b7a2-87ff50e92c34\") " pod="openshift-nmstate/nmstate-handler-62g2t" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.364639 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/2a149626-5a36-418c-b7a2-87ff50e92c34-nmstate-lock\") pod \"nmstate-handler-62g2t\" (UID: \"2a149626-5a36-418c-b7a2-87ff50e92c34\") " pod="openshift-nmstate/nmstate-handler-62g2t" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.364681 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2a149626-5a36-418c-b7a2-87ff50e92c34-ovs-socket\") pod \"nmstate-handler-62g2t\" (UID: \"2a149626-5a36-418c-b7a2-87ff50e92c34\") " pod="openshift-nmstate/nmstate-handler-62g2t" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.364701 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2a149626-5a36-418c-b7a2-87ff50e92c34-dbus-socket\") pod \"nmstate-handler-62g2t\" (UID: \"2a149626-5a36-418c-b7a2-87ff50e92c34\") " pod="openshift-nmstate/nmstate-handler-62g2t" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.401485 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9rsm\" (UniqueName: \"kubernetes.io/projected/2ce6529a-8832-46df-b211-7d7f2388214b-kube-api-access-j9rsm\") pod \"nmstate-metrics-54757c584b-qg2h4\" (UID: \"2ce6529a-8832-46df-b211-7d7f2388214b\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-qg2h4" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.412487 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs"] Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.413859 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.416519 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-97wns" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.417444 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.418311 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.436392 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs"] Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.465961 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5b25585-8953-42bb-a128-13272bda1f87-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-lt6cs\" (UID: \"e5b25585-8953-42bb-a128-13272bda1f87\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.466073 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkgjb\" (UniqueName: \"kubernetes.io/projected/2a149626-5a36-418c-b7a2-87ff50e92c34-kube-api-access-zkgjb\") pod \"nmstate-handler-62g2t\" (UID: \"2a149626-5a36-418c-b7a2-87ff50e92c34\") " pod="openshift-nmstate/nmstate-handler-62g2t" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.466306 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfspg\" (UniqueName: \"kubernetes.io/projected/e5b25585-8953-42bb-a128-13272bda1f87-kube-api-access-vfspg\") pod \"nmstate-console-plugin-7754f76f8b-lt6cs\" (UID: 
\"e5b25585-8953-42bb-a128-13272bda1f87\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.466475 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2a149626-5a36-418c-b7a2-87ff50e92c34-nmstate-lock\") pod \"nmstate-handler-62g2t\" (UID: \"2a149626-5a36-418c-b7a2-87ff50e92c34\") " pod="openshift-nmstate/nmstate-handler-62g2t" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.466601 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2a149626-5a36-418c-b7a2-87ff50e92c34-ovs-socket\") pod \"nmstate-handler-62g2t\" (UID: \"2a149626-5a36-418c-b7a2-87ff50e92c34\") " pod="openshift-nmstate/nmstate-handler-62g2t" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.466660 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2a149626-5a36-418c-b7a2-87ff50e92c34-nmstate-lock\") pod \"nmstate-handler-62g2t\" (UID: \"2a149626-5a36-418c-b7a2-87ff50e92c34\") " pod="openshift-nmstate/nmstate-handler-62g2t" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.466692 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2a149626-5a36-418c-b7a2-87ff50e92c34-ovs-socket\") pod \"nmstate-handler-62g2t\" (UID: \"2a149626-5a36-418c-b7a2-87ff50e92c34\") " pod="openshift-nmstate/nmstate-handler-62g2t" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.466751 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2a149626-5a36-418c-b7a2-87ff50e92c34-dbus-socket\") pod \"nmstate-handler-62g2t\" (UID: \"2a149626-5a36-418c-b7a2-87ff50e92c34\") " pod="openshift-nmstate/nmstate-handler-62g2t" Jan 29 08:54:10 crc 
kubenswrapper[4895]: I0129 08:54:10.466877 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/df364a5d-82b0-43f6-9e56-fb2fd0fef1e2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-mgwfl\" (UID: \"df364a5d-82b0-43f6-9e56-fb2fd0fef1e2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.466969 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6llb\" (UniqueName: \"kubernetes.io/projected/df364a5d-82b0-43f6-9e56-fb2fd0fef1e2-kube-api-access-g6llb\") pod \"nmstate-webhook-8474b5b9d8-mgwfl\" (UID: \"df364a5d-82b0-43f6-9e56-fb2fd0fef1e2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.467013 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e5b25585-8953-42bb-a128-13272bda1f87-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-lt6cs\" (UID: \"e5b25585-8953-42bb-a128-13272bda1f87\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.467474 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2a149626-5a36-418c-b7a2-87ff50e92c34-dbus-socket\") pod \"nmstate-handler-62g2t\" (UID: \"2a149626-5a36-418c-b7a2-87ff50e92c34\") " pod="openshift-nmstate/nmstate-handler-62g2t" Jan 29 08:54:10 crc kubenswrapper[4895]: E0129 08:54:10.467606 4895 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 29 08:54:10 crc kubenswrapper[4895]: E0129 08:54:10.467663 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df364a5d-82b0-43f6-9e56-fb2fd0fef1e2-tls-key-pair 
podName:df364a5d-82b0-43f6-9e56-fb2fd0fef1e2 nodeName:}" failed. No retries permitted until 2026-01-29 08:54:10.967645724 +0000 UTC m=+792.609153860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/df364a5d-82b0-43f6-9e56-fb2fd0fef1e2-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-mgwfl" (UID: "df364a5d-82b0-43f6-9e56-fb2fd0fef1e2") : secret "openshift-nmstate-webhook" not found Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.488142 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkgjb\" (UniqueName: \"kubernetes.io/projected/2a149626-5a36-418c-b7a2-87ff50e92c34-kube-api-access-zkgjb\") pod \"nmstate-handler-62g2t\" (UID: \"2a149626-5a36-418c-b7a2-87ff50e92c34\") " pod="openshift-nmstate/nmstate-handler-62g2t" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.491247 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6llb\" (UniqueName: \"kubernetes.io/projected/df364a5d-82b0-43f6-9e56-fb2fd0fef1e2-kube-api-access-g6llb\") pod \"nmstate-webhook-8474b5b9d8-mgwfl\" (UID: \"df364a5d-82b0-43f6-9e56-fb2fd0fef1e2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.562482 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-qg2h4" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.568615 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfspg\" (UniqueName: \"kubernetes.io/projected/e5b25585-8953-42bb-a128-13272bda1f87-kube-api-access-vfspg\") pod \"nmstate-console-plugin-7754f76f8b-lt6cs\" (UID: \"e5b25585-8953-42bb-a128-13272bda1f87\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.568739 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e5b25585-8953-42bb-a128-13272bda1f87-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-lt6cs\" (UID: \"e5b25585-8953-42bb-a128-13272bda1f87\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.568775 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5b25585-8953-42bb-a128-13272bda1f87-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-lt6cs\" (UID: \"e5b25585-8953-42bb-a128-13272bda1f87\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.569986 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e5b25585-8953-42bb-a128-13272bda1f87-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-lt6cs\" (UID: \"e5b25585-8953-42bb-a128-13272bda1f87\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.576467 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e5b25585-8953-42bb-a128-13272bda1f87-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-lt6cs\" (UID: \"e5b25585-8953-42bb-a128-13272bda1f87\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.598886 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfspg\" (UniqueName: \"kubernetes.io/projected/e5b25585-8953-42bb-a128-13272bda1f87-kube-api-access-vfspg\") pod \"nmstate-console-plugin-7754f76f8b-lt6cs\" (UID: \"e5b25585-8953-42bb-a128-13272bda1f87\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.604861 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-62g2t" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.639601 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5969ccc7b6-8r7pl"] Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.641983 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: W0129 08:54:10.643899 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a149626_5a36_418c_b7a2_87ff50e92c34.slice/crio-81c88ebd40492056e9531284d04634dd62da16b8b6491206077a9dd1397ce3db WatchSource:0}: Error finding container 81c88ebd40492056e9531284d04634dd62da16b8b6491206077a9dd1397ce3db: Status 404 returned error can't find the container with id 81c88ebd40492056e9531284d04634dd62da16b8b6491206077a9dd1397ce3db Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.649185 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5969ccc7b6-8r7pl"] Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.669860 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a2f5895-2172-4835-b54e-4dd757e0cc67-console-serving-cert\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.669932 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8a2f5895-2172-4835-b54e-4dd757e0cc67-console-config\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.669970 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a2f5895-2172-4835-b54e-4dd757e0cc67-trusted-ca-bundle\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " 
pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.670020 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8a2f5895-2172-4835-b54e-4dd757e0cc67-service-ca\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.670048 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8a2f5895-2172-4835-b54e-4dd757e0cc67-oauth-serving-cert\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.670071 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf78b\" (UniqueName: \"kubernetes.io/projected/8a2f5895-2172-4835-b54e-4dd757e0cc67-kube-api-access-rf78b\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.670146 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8a2f5895-2172-4835-b54e-4dd757e0cc67-console-oauth-config\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.733396 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.774731 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8a2f5895-2172-4835-b54e-4dd757e0cc67-oauth-serving-cert\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.775169 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf78b\" (UniqueName: \"kubernetes.io/projected/8a2f5895-2172-4835-b54e-4dd757e0cc67-kube-api-access-rf78b\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.775237 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8a2f5895-2172-4835-b54e-4dd757e0cc67-console-oauth-config\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.775268 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a2f5895-2172-4835-b54e-4dd757e0cc67-console-serving-cert\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.775287 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8a2f5895-2172-4835-b54e-4dd757e0cc67-console-config\") pod \"console-5969ccc7b6-8r7pl\" (UID: 
\"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.775307 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a2f5895-2172-4835-b54e-4dd757e0cc67-trusted-ca-bundle\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.775347 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8a2f5895-2172-4835-b54e-4dd757e0cc67-service-ca\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.776743 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8a2f5895-2172-4835-b54e-4dd757e0cc67-service-ca\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.778164 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8a2f5895-2172-4835-b54e-4dd757e0cc67-oauth-serving-cert\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.778780 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a2f5895-2172-4835-b54e-4dd757e0cc67-trusted-ca-bundle\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " 
pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.779269 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8a2f5895-2172-4835-b54e-4dd757e0cc67-console-config\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.789102 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8a2f5895-2172-4835-b54e-4dd757e0cc67-console-oauth-config\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.791940 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a2f5895-2172-4835-b54e-4dd757e0cc67-console-serving-cert\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.798975 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf78b\" (UniqueName: \"kubernetes.io/projected/8a2f5895-2172-4835-b54e-4dd757e0cc67-kube-api-access-rf78b\") pod \"console-5969ccc7b6-8r7pl\" (UID: \"8a2f5895-2172-4835-b54e-4dd757e0cc67\") " pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.914236 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-qg2h4"] Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.964602 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.978930 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/df364a5d-82b0-43f6-9e56-fb2fd0fef1e2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-mgwfl\" (UID: \"df364a5d-82b0-43f6-9e56-fb2fd0fef1e2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl" Jan 29 08:54:10 crc kubenswrapper[4895]: I0129 08:54:10.982269 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/df364a5d-82b0-43f6-9e56-fb2fd0fef1e2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-mgwfl\" (UID: \"df364a5d-82b0-43f6-9e56-fb2fd0fef1e2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl" Jan 29 08:54:11 crc kubenswrapper[4895]: I0129 08:54:11.177107 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl" Jan 29 08:54:11 crc kubenswrapper[4895]: I0129 08:54:11.232764 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs"] Jan 29 08:54:11 crc kubenswrapper[4895]: I0129 08:54:11.266061 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5969ccc7b6-8r7pl"] Jan 29 08:54:11 crc kubenswrapper[4895]: W0129 08:54:11.274369 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a2f5895_2172_4835_b54e_4dd757e0cc67.slice/crio-d3d2a4b7e86dbc5781fbc794519e6465edd8652979455901b3d5730d36102234 WatchSource:0}: Error finding container d3d2a4b7e86dbc5781fbc794519e6465edd8652979455901b3d5730d36102234: Status 404 returned error can't find the container with id d3d2a4b7e86dbc5781fbc794519e6465edd8652979455901b3d5730d36102234 Jan 29 08:54:11 crc kubenswrapper[4895]: I0129 
08:54:11.424806 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl"] Jan 29 08:54:11 crc kubenswrapper[4895]: W0129 08:54:11.432685 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf364a5d_82b0_43f6_9e56_fb2fd0fef1e2.slice/crio-f9714373f1d1afea8cf3a8bf0a892f20ee7dcd1ca20f00bcac9c3b2d81e6eb34 WatchSource:0}: Error finding container f9714373f1d1afea8cf3a8bf0a892f20ee7dcd1ca20f00bcac9c3b2d81e6eb34: Status 404 returned error can't find the container with id f9714373f1d1afea8cf3a8bf0a892f20ee7dcd1ca20f00bcac9c3b2d81e6eb34 Jan 29 08:54:11 crc kubenswrapper[4895]: I0129 08:54:11.586653 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-qg2h4" event={"ID":"2ce6529a-8832-46df-b211-7d7f2388214b","Type":"ContainerStarted","Data":"251d6fd24ea614cbfd505ac3d0b3f535de07861cd03ea89bd34423b25e4ccb3f"} Jan 29 08:54:11 crc kubenswrapper[4895]: I0129 08:54:11.589891 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5969ccc7b6-8r7pl" event={"ID":"8a2f5895-2172-4835-b54e-4dd757e0cc67","Type":"ContainerStarted","Data":"8a1f6cb3072d0f432a4b3b8fb875dfa85bbe12e90c26b34aa4ef7a730c66daed"} Jan 29 08:54:11 crc kubenswrapper[4895]: I0129 08:54:11.589961 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5969ccc7b6-8r7pl" event={"ID":"8a2f5895-2172-4835-b54e-4dd757e0cc67","Type":"ContainerStarted","Data":"d3d2a4b7e86dbc5781fbc794519e6465edd8652979455901b3d5730d36102234"} Jan 29 08:54:11 crc kubenswrapper[4895]: I0129 08:54:11.592988 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl" event={"ID":"df364a5d-82b0-43f6-9e56-fb2fd0fef1e2","Type":"ContainerStarted","Data":"f9714373f1d1afea8cf3a8bf0a892f20ee7dcd1ca20f00bcac9c3b2d81e6eb34"} Jan 29 08:54:11 crc 
kubenswrapper[4895]: I0129 08:54:11.597116 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs" event={"ID":"e5b25585-8953-42bb-a128-13272bda1f87","Type":"ContainerStarted","Data":"bd7cb4e41cdb0f3bf63c72255077b8a97ee46e9d3bb9da13709c21ed9734b643"} Jan 29 08:54:11 crc kubenswrapper[4895]: I0129 08:54:11.598843 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-62g2t" event={"ID":"2a149626-5a36-418c-b7a2-87ff50e92c34","Type":"ContainerStarted","Data":"81c88ebd40492056e9531284d04634dd62da16b8b6491206077a9dd1397ce3db"} Jan 29 08:54:12 crc kubenswrapper[4895]: I0129 08:54:12.279473 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:54:12 crc kubenswrapper[4895]: I0129 08:54:12.301630 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5969ccc7b6-8r7pl" podStartSLOduration=2.30160735 podStartE2EDuration="2.30160735s" podCreationTimestamp="2026-01-29 08:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:54:11.607218581 +0000 UTC m=+793.248726747" watchObservedRunningTime="2026-01-29 08:54:12.30160735 +0000 UTC m=+793.943115496" Jan 29 08:54:12 crc kubenswrapper[4895]: I0129 08:54:12.328854 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:54:12 crc kubenswrapper[4895]: I0129 08:54:12.517490 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xsw89"] Jan 29 08:54:13 crc kubenswrapper[4895]: I0129 08:54:13.619449 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xsw89" podUID="86ff3ca9-bfe3-43a1-ba39-63635ca905e3" 
containerName="registry-server" containerID="cri-o://dcd658ac8d2f4c59295139d3ebdf0a4d735bd483c4f89c7c0f9afc2d9ee27d65" gracePeriod=2 Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.209687 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.223326 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-catalog-content\") pod \"86ff3ca9-bfe3-43a1-ba39-63635ca905e3\" (UID: \"86ff3ca9-bfe3-43a1-ba39-63635ca905e3\") " Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.223383 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-utilities\") pod \"86ff3ca9-bfe3-43a1-ba39-63635ca905e3\" (UID: \"86ff3ca9-bfe3-43a1-ba39-63635ca905e3\") " Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.223477 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq4r2\" (UniqueName: \"kubernetes.io/projected/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-kube-api-access-xq4r2\") pod \"86ff3ca9-bfe3-43a1-ba39-63635ca905e3\" (UID: \"86ff3ca9-bfe3-43a1-ba39-63635ca905e3\") " Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.225763 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-utilities" (OuterVolumeSpecName: "utilities") pod "86ff3ca9-bfe3-43a1-ba39-63635ca905e3" (UID: "86ff3ca9-bfe3-43a1-ba39-63635ca905e3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.236828 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-kube-api-access-xq4r2" (OuterVolumeSpecName: "kube-api-access-xq4r2") pod "86ff3ca9-bfe3-43a1-ba39-63635ca905e3" (UID: "86ff3ca9-bfe3-43a1-ba39-63635ca905e3"). InnerVolumeSpecName "kube-api-access-xq4r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.325461 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.325527 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq4r2\" (UniqueName: \"kubernetes.io/projected/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-kube-api-access-xq4r2\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.354823 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86ff3ca9-bfe3-43a1-ba39-63635ca905e3" (UID: "86ff3ca9-bfe3-43a1-ba39-63635ca905e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.427074 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86ff3ca9-bfe3-43a1-ba39-63635ca905e3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.631071 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-62g2t" event={"ID":"2a149626-5a36-418c-b7a2-87ff50e92c34","Type":"ContainerStarted","Data":"916a25786c891a8fd2e84f4bf99d582d9372fd5ef4eb9c3b9067079697c0cb1f"} Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.632648 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-62g2t" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.639669 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-qg2h4" event={"ID":"2ce6529a-8832-46df-b211-7d7f2388214b","Type":"ContainerStarted","Data":"7d5406ee0d56b6f4faa9a1a65976ff07ced6de6409ff172ef597ea686638b0bb"} Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.643834 4895 generic.go:334] "Generic (PLEG): container finished" podID="86ff3ca9-bfe3-43a1-ba39-63635ca905e3" containerID="dcd658ac8d2f4c59295139d3ebdf0a4d735bd483c4f89c7c0f9afc2d9ee27d65" exitCode=0 Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.643928 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsw89" event={"ID":"86ff3ca9-bfe3-43a1-ba39-63635ca905e3","Type":"ContainerDied","Data":"dcd658ac8d2f4c59295139d3ebdf0a4d735bd483c4f89c7c0f9afc2d9ee27d65"} Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.643969 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsw89" 
event={"ID":"86ff3ca9-bfe3-43a1-ba39-63635ca905e3","Type":"ContainerDied","Data":"5d070f119107108e1d80ed270d7eea0a90a41508337f67aeefa81aa7df5fd2de"} Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.643968 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsw89" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.643991 4895 scope.go:117] "RemoveContainer" containerID="dcd658ac8d2f4c59295139d3ebdf0a4d735bd483c4f89c7c0f9afc2d9ee27d65" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.647453 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl" event={"ID":"df364a5d-82b0-43f6-9e56-fb2fd0fef1e2","Type":"ContainerStarted","Data":"9778793a8f1eabc2789c2ce4402648ab71a694b26231523f868f1d760b30187e"} Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.648133 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.649396 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs" event={"ID":"e5b25585-8953-42bb-a128-13272bda1f87","Type":"ContainerStarted","Data":"b2f6bb990e885574765438e26ab316d079781325e69d24c480b0b94d1a2d6e59"} Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.663384 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-62g2t" podStartSLOduration=1.261913732 podStartE2EDuration="4.663355664s" podCreationTimestamp="2026-01-29 08:54:10 +0000 UTC" firstStartedPulling="2026-01-29 08:54:10.650910621 +0000 UTC m=+792.292418767" lastFinishedPulling="2026-01-29 08:54:14.052352553 +0000 UTC m=+795.693860699" observedRunningTime="2026-01-29 08:54:14.655357801 +0000 UTC m=+796.296865957" watchObservedRunningTime="2026-01-29 08:54:14.663355664 +0000 UTC m=+796.304863810" 
Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.672201 4895 scope.go:117] "RemoveContainer" containerID="50c2fd765c396456aef5fbe4880901e223c615427cfb6acbaa35ce94b72970f3" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.678552 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lt6cs" podStartSLOduration=1.882324162 podStartE2EDuration="4.678519647s" podCreationTimestamp="2026-01-29 08:54:10 +0000 UTC" firstStartedPulling="2026-01-29 08:54:11.246790869 +0000 UTC m=+792.888299015" lastFinishedPulling="2026-01-29 08:54:14.042986344 +0000 UTC m=+795.684494500" observedRunningTime="2026-01-29 08:54:14.676166214 +0000 UTC m=+796.317674410" watchObservedRunningTime="2026-01-29 08:54:14.678519647 +0000 UTC m=+796.320027793" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.705800 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl" podStartSLOduration=2.0744119149999998 podStartE2EDuration="4.705778713s" podCreationTimestamp="2026-01-29 08:54:10 +0000 UTC" firstStartedPulling="2026-01-29 08:54:11.435431429 +0000 UTC m=+793.076939575" lastFinishedPulling="2026-01-29 08:54:14.066798227 +0000 UTC m=+795.708306373" observedRunningTime="2026-01-29 08:54:14.702438214 +0000 UTC m=+796.343946370" watchObservedRunningTime="2026-01-29 08:54:14.705778713 +0000 UTC m=+796.347286859" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.715497 4895 scope.go:117] "RemoveContainer" containerID="cb96db730431521a993b2c6a0d974d2829d8d681d8da74a4164e0fb84070ed75" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.732186 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xsw89"] Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.737003 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xsw89"] Jan 29 08:54:14 crc 
kubenswrapper[4895]: I0129 08:54:14.743880 4895 scope.go:117] "RemoveContainer" containerID="dcd658ac8d2f4c59295139d3ebdf0a4d735bd483c4f89c7c0f9afc2d9ee27d65" Jan 29 08:54:14 crc kubenswrapper[4895]: E0129 08:54:14.744841 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcd658ac8d2f4c59295139d3ebdf0a4d735bd483c4f89c7c0f9afc2d9ee27d65\": container with ID starting with dcd658ac8d2f4c59295139d3ebdf0a4d735bd483c4f89c7c0f9afc2d9ee27d65 not found: ID does not exist" containerID="dcd658ac8d2f4c59295139d3ebdf0a4d735bd483c4f89c7c0f9afc2d9ee27d65" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.744946 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcd658ac8d2f4c59295139d3ebdf0a4d735bd483c4f89c7c0f9afc2d9ee27d65"} err="failed to get container status \"dcd658ac8d2f4c59295139d3ebdf0a4d735bd483c4f89c7c0f9afc2d9ee27d65\": rpc error: code = NotFound desc = could not find container \"dcd658ac8d2f4c59295139d3ebdf0a4d735bd483c4f89c7c0f9afc2d9ee27d65\": container with ID starting with dcd658ac8d2f4c59295139d3ebdf0a4d735bd483c4f89c7c0f9afc2d9ee27d65 not found: ID does not exist" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.744989 4895 scope.go:117] "RemoveContainer" containerID="50c2fd765c396456aef5fbe4880901e223c615427cfb6acbaa35ce94b72970f3" Jan 29 08:54:14 crc kubenswrapper[4895]: E0129 08:54:14.745522 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c2fd765c396456aef5fbe4880901e223c615427cfb6acbaa35ce94b72970f3\": container with ID starting with 50c2fd765c396456aef5fbe4880901e223c615427cfb6acbaa35ce94b72970f3 not found: ID does not exist" containerID="50c2fd765c396456aef5fbe4880901e223c615427cfb6acbaa35ce94b72970f3" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.745700 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"50c2fd765c396456aef5fbe4880901e223c615427cfb6acbaa35ce94b72970f3"} err="failed to get container status \"50c2fd765c396456aef5fbe4880901e223c615427cfb6acbaa35ce94b72970f3\": rpc error: code = NotFound desc = could not find container \"50c2fd765c396456aef5fbe4880901e223c615427cfb6acbaa35ce94b72970f3\": container with ID starting with 50c2fd765c396456aef5fbe4880901e223c615427cfb6acbaa35ce94b72970f3 not found: ID does not exist" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.745754 4895 scope.go:117] "RemoveContainer" containerID="cb96db730431521a993b2c6a0d974d2829d8d681d8da74a4164e0fb84070ed75" Jan 29 08:54:14 crc kubenswrapper[4895]: E0129 08:54:14.746234 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb96db730431521a993b2c6a0d974d2829d8d681d8da74a4164e0fb84070ed75\": container with ID starting with cb96db730431521a993b2c6a0d974d2829d8d681d8da74a4164e0fb84070ed75 not found: ID does not exist" containerID="cb96db730431521a993b2c6a0d974d2829d8d681d8da74a4164e0fb84070ed75" Jan 29 08:54:14 crc kubenswrapper[4895]: I0129 08:54:14.746385 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb96db730431521a993b2c6a0d974d2829d8d681d8da74a4164e0fb84070ed75"} err="failed to get container status \"cb96db730431521a993b2c6a0d974d2829d8d681d8da74a4164e0fb84070ed75\": rpc error: code = NotFound desc = could not find container \"cb96db730431521a993b2c6a0d974d2829d8d681d8da74a4164e0fb84070ed75\": container with ID starting with cb96db730431521a993b2c6a0d974d2829d8d681d8da74a4164e0fb84070ed75 not found: ID does not exist" Jan 29 08:54:15 crc kubenswrapper[4895]: I0129 08:54:15.223055 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86ff3ca9-bfe3-43a1-ba39-63635ca905e3" path="/var/lib/kubelet/pods/86ff3ca9-bfe3-43a1-ba39-63635ca905e3/volumes" Jan 29 08:54:16 crc kubenswrapper[4895]: I0129 
08:54:16.021169 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:54:16 crc kubenswrapper[4895]: I0129 08:54:16.021261 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:54:16 crc kubenswrapper[4895]: I0129 08:54:16.021325 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:54:16 crc kubenswrapper[4895]: I0129 08:54:16.022146 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"196dd09f37b20983a231714c51e3920c9238c0dcfbe938ccc9dfef7054a9c34d"} pod="openshift-machine-config-operator/machine-config-daemon-z82hk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:54:16 crc kubenswrapper[4895]: I0129 08:54:16.022224 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" containerID="cri-o://196dd09f37b20983a231714c51e3920c9238c0dcfbe938ccc9dfef7054a9c34d" gracePeriod=600 Jan 29 08:54:16 crc kubenswrapper[4895]: I0129 08:54:16.668208 4895 generic.go:334] "Generic (PLEG): container finished" podID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerID="196dd09f37b20983a231714c51e3920c9238c0dcfbe938ccc9dfef7054a9c34d" exitCode=0 Jan 29 
08:54:16 crc kubenswrapper[4895]: I0129 08:54:16.668262 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerDied","Data":"196dd09f37b20983a231714c51e3920c9238c0dcfbe938ccc9dfef7054a9c34d"} Jan 29 08:54:16 crc kubenswrapper[4895]: I0129 08:54:16.668945 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerStarted","Data":"e283faf84652d2e1164b1f178cfd437682bdf8b7e6ce6e055041db42bca73378"} Jan 29 08:54:16 crc kubenswrapper[4895]: I0129 08:54:16.668984 4895 scope.go:117] "RemoveContainer" containerID="aa1a317827baf23906951927e8f8b8c0dda6533f94b12af3d4987c30b71139e1" Jan 29 08:54:17 crc kubenswrapper[4895]: I0129 08:54:17.679204 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-qg2h4" event={"ID":"2ce6529a-8832-46df-b211-7d7f2388214b","Type":"ContainerStarted","Data":"8d124e4bcc2d00cde8d8ca1a00bb19f3210468e8b2a347201a7511f9876a0e49"} Jan 29 08:54:17 crc kubenswrapper[4895]: I0129 08:54:17.698675 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-qg2h4" podStartSLOduration=1.6026317479999999 podStartE2EDuration="7.698650242s" podCreationTimestamp="2026-01-29 08:54:10 +0000 UTC" firstStartedPulling="2026-01-29 08:54:10.926266368 +0000 UTC m=+792.567774524" lastFinishedPulling="2026-01-29 08:54:17.022284862 +0000 UTC m=+798.663793018" observedRunningTime="2026-01-29 08:54:17.6948003 +0000 UTC m=+799.336308446" watchObservedRunningTime="2026-01-29 08:54:17.698650242 +0000 UTC m=+799.340158388" Jan 29 08:54:20 crc kubenswrapper[4895]: I0129 08:54:20.630492 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-62g2t" Jan 
29 08:54:20 crc kubenswrapper[4895]: I0129 08:54:20.965499 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:20 crc kubenswrapper[4895]: I0129 08:54:20.966025 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:20 crc kubenswrapper[4895]: I0129 08:54:20.971032 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:21 crc kubenswrapper[4895]: I0129 08:54:21.712175 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5969ccc7b6-8r7pl" Jan 29 08:54:21 crc kubenswrapper[4895]: I0129 08:54:21.793042 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-z5sff"] Jan 29 08:54:31 crc kubenswrapper[4895]: I0129 08:54:31.184503 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-mgwfl" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.421244 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh"] Jan 29 08:54:45 crc kubenswrapper[4895]: E0129 08:54:45.422370 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ff3ca9-bfe3-43a1-ba39-63635ca905e3" containerName="extract-utilities" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.422391 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ff3ca9-bfe3-43a1-ba39-63635ca905e3" containerName="extract-utilities" Jan 29 08:54:45 crc kubenswrapper[4895]: E0129 08:54:45.422406 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ff3ca9-bfe3-43a1-ba39-63635ca905e3" containerName="registry-server" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.422416 4895 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="86ff3ca9-bfe3-43a1-ba39-63635ca905e3" containerName="registry-server" Jan 29 08:54:45 crc kubenswrapper[4895]: E0129 08:54:45.422444 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ff3ca9-bfe3-43a1-ba39-63635ca905e3" containerName="extract-content" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.422453 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ff3ca9-bfe3-43a1-ba39-63635ca905e3" containerName="extract-content" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.422628 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="86ff3ca9-bfe3-43a1-ba39-63635ca905e3" containerName="registry-server" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.423750 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.433434 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.438985 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh"] Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.509442 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh\" (UID: \"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.509613 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh\" (UID: \"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.509672 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6bp7\" (UniqueName: \"kubernetes.io/projected/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-kube-api-access-v6bp7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh\" (UID: \"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.610936 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh\" (UID: \"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.611055 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh\" (UID: \"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.611101 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6bp7\" (UniqueName: \"kubernetes.io/projected/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-kube-api-access-v6bp7\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh\" (UID: \"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.611802 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh\" (UID: \"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.611813 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh\" (UID: \"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.635474 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6bp7\" (UniqueName: \"kubernetes.io/projected/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-kube-api-access-v6bp7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh\" (UID: \"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.741120 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" Jan 29 08:54:45 crc kubenswrapper[4895]: I0129 08:54:45.956048 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh"] Jan 29 08:54:46 crc kubenswrapper[4895]: I0129 08:54:46.845152 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-z5sff" podUID="ea9f8a45-3fdc-4780-a008-e0f77c99dffc" containerName="console" containerID="cri-o://e4a35276bc76fa5acc2bb2060b997831c90dfc02c2635c8d04c893775de420f2" gracePeriod=15 Jan 29 08:54:46 crc kubenswrapper[4895]: I0129 08:54:46.879476 4895 generic.go:334] "Generic (PLEG): container finished" podID="0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709" containerID="c795186407b01e8e265424911e3f6cdb3130be1e5100312578c78e4af437c3e4" exitCode=0 Jan 29 08:54:46 crc kubenswrapper[4895]: I0129 08:54:46.879524 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" event={"ID":"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709","Type":"ContainerDied","Data":"c795186407b01e8e265424911e3f6cdb3130be1e5100312578c78e4af437c3e4"} Jan 29 08:54:46 crc kubenswrapper[4895]: I0129 08:54:46.879555 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" event={"ID":"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709","Type":"ContainerStarted","Data":"2c606f464d45fdaa1094041080ec047aa2df3d92938b23dbfa6e39ec3d6ac648"} Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.330339 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-z5sff_ea9f8a45-3fdc-4780-a008-e0f77c99dffc/console/0.log" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.330419 4895 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.343068 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-service-ca\") pod \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.343151 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bmk4\" (UniqueName: \"kubernetes.io/projected/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-kube-api-access-6bmk4\") pod \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.343227 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-config\") pod \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.343254 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-trusted-ca-bundle\") pod \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.343277 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-oauth-config\") pod \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.343312 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-serving-cert\") pod \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.343343 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-oauth-serving-cert\") pod \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\" (UID: \"ea9f8a45-3fdc-4780-a008-e0f77c99dffc\") " Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.344874 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ea9f8a45-3fdc-4780-a008-e0f77c99dffc" (UID: "ea9f8a45-3fdc-4780-a008-e0f77c99dffc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.344937 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ea9f8a45-3fdc-4780-a008-e0f77c99dffc" (UID: "ea9f8a45-3fdc-4780-a008-e0f77c99dffc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.345011 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-service-ca" (OuterVolumeSpecName: "service-ca") pod "ea9f8a45-3fdc-4780-a008-e0f77c99dffc" (UID: "ea9f8a45-3fdc-4780-a008-e0f77c99dffc"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.345195 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-config" (OuterVolumeSpecName: "console-config") pod "ea9f8a45-3fdc-4780-a008-e0f77c99dffc" (UID: "ea9f8a45-3fdc-4780-a008-e0f77c99dffc"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.355118 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ea9f8a45-3fdc-4780-a008-e0f77c99dffc" (UID: "ea9f8a45-3fdc-4780-a008-e0f77c99dffc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.356228 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-kube-api-access-6bmk4" (OuterVolumeSpecName: "kube-api-access-6bmk4") pod "ea9f8a45-3fdc-4780-a008-e0f77c99dffc" (UID: "ea9f8a45-3fdc-4780-a008-e0f77c99dffc"). InnerVolumeSpecName "kube-api-access-6bmk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.359367 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ea9f8a45-3fdc-4780-a008-e0f77c99dffc" (UID: "ea9f8a45-3fdc-4780-a008-e0f77c99dffc"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.444579 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bmk4\" (UniqueName: \"kubernetes.io/projected/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-kube-api-access-6bmk4\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.444625 4895 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.444633 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.444641 4895 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.444649 4895 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.444658 4895 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.444666 4895 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea9f8a45-3fdc-4780-a008-e0f77c99dffc-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:47 crc 
kubenswrapper[4895]: I0129 08:54:47.887030 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-z5sff_ea9f8a45-3fdc-4780-a008-e0f77c99dffc/console/0.log" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.887540 4895 generic.go:334] "Generic (PLEG): container finished" podID="ea9f8a45-3fdc-4780-a008-e0f77c99dffc" containerID="e4a35276bc76fa5acc2bb2060b997831c90dfc02c2635c8d04c893775de420f2" exitCode=2 Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.887582 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z5sff" event={"ID":"ea9f8a45-3fdc-4780-a008-e0f77c99dffc","Type":"ContainerDied","Data":"e4a35276bc76fa5acc2bb2060b997831c90dfc02c2635c8d04c893775de420f2"} Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.887613 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z5sff" event={"ID":"ea9f8a45-3fdc-4780-a008-e0f77c99dffc","Type":"ContainerDied","Data":"02d6a49f736b9f51cfbdc262c2f156da9cf9aab6f19e27a7056984ec562ba1af"} Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.887632 4895 scope.go:117] "RemoveContainer" containerID="e4a35276bc76fa5acc2bb2060b997831c90dfc02c2635c8d04c893775de420f2" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.887629 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-z5sff" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.914771 4895 scope.go:117] "RemoveContainer" containerID="e4a35276bc76fa5acc2bb2060b997831c90dfc02c2635c8d04c893775de420f2" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.916966 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-z5sff"] Jan 29 08:54:47 crc kubenswrapper[4895]: E0129 08:54:47.919660 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4a35276bc76fa5acc2bb2060b997831c90dfc02c2635c8d04c893775de420f2\": container with ID starting with e4a35276bc76fa5acc2bb2060b997831c90dfc02c2635c8d04c893775de420f2 not found: ID does not exist" containerID="e4a35276bc76fa5acc2bb2060b997831c90dfc02c2635c8d04c893775de420f2" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.919702 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4a35276bc76fa5acc2bb2060b997831c90dfc02c2635c8d04c893775de420f2"} err="failed to get container status \"e4a35276bc76fa5acc2bb2060b997831c90dfc02c2635c8d04c893775de420f2\": rpc error: code = NotFound desc = could not find container \"e4a35276bc76fa5acc2bb2060b997831c90dfc02c2635c8d04c893775de420f2\": container with ID starting with e4a35276bc76fa5acc2bb2060b997831c90dfc02c2635c8d04c893775de420f2 not found: ID does not exist" Jan 29 08:54:47 crc kubenswrapper[4895]: I0129 08:54:47.921295 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-z5sff"] Jan 29 08:54:48 crc kubenswrapper[4895]: I0129 08:54:48.896935 4895 generic.go:334] "Generic (PLEG): container finished" podID="0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709" containerID="77d9a08e48d08a4b33629b1c05e6615d993e97651fa8b91787d257a66d55e5fc" exitCode=0 Jan 29 08:54:48 crc kubenswrapper[4895]: I0129 08:54:48.897010 4895 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" event={"ID":"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709","Type":"ContainerDied","Data":"77d9a08e48d08a4b33629b1c05e6615d993e97651fa8b91787d257a66d55e5fc"} Jan 29 08:54:49 crc kubenswrapper[4895]: I0129 08:54:49.220577 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea9f8a45-3fdc-4780-a008-e0f77c99dffc" path="/var/lib/kubelet/pods/ea9f8a45-3fdc-4780-a008-e0f77c99dffc/volumes" Jan 29 08:54:49 crc kubenswrapper[4895]: I0129 08:54:49.906474 4895 generic.go:334] "Generic (PLEG): container finished" podID="0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709" containerID="18524b4526215638b593261a5a3dba129eec8f31822a3c1fdcf215600ab50bea" exitCode=0 Jan 29 08:54:49 crc kubenswrapper[4895]: I0129 08:54:49.906529 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" event={"ID":"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709","Type":"ContainerDied","Data":"18524b4526215638b593261a5a3dba129eec8f31822a3c1fdcf215600ab50bea"} Jan 29 08:54:51 crc kubenswrapper[4895]: I0129 08:54:51.146873 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" Jan 29 08:54:51 crc kubenswrapper[4895]: I0129 08:54:51.311963 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6bp7\" (UniqueName: \"kubernetes.io/projected/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-kube-api-access-v6bp7\") pod \"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709\" (UID: \"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709\") " Jan 29 08:54:51 crc kubenswrapper[4895]: I0129 08:54:51.312034 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-bundle\") pod \"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709\" (UID: \"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709\") " Jan 29 08:54:51 crc kubenswrapper[4895]: I0129 08:54:51.312231 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-util\") pod \"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709\" (UID: \"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709\") " Jan 29 08:54:51 crc kubenswrapper[4895]: I0129 08:54:51.313812 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-bundle" (OuterVolumeSpecName: "bundle") pod "0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709" (UID: "0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:54:51 crc kubenswrapper[4895]: I0129 08:54:51.320399 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-kube-api-access-v6bp7" (OuterVolumeSpecName: "kube-api-access-v6bp7") pod "0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709" (UID: "0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709"). InnerVolumeSpecName "kube-api-access-v6bp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:54:51 crc kubenswrapper[4895]: I0129 08:54:51.414645 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6bp7\" (UniqueName: \"kubernetes.io/projected/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-kube-api-access-v6bp7\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:51 crc kubenswrapper[4895]: I0129 08:54:51.414698 4895 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:51 crc kubenswrapper[4895]: I0129 08:54:51.703588 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-util" (OuterVolumeSpecName: "util") pod "0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709" (UID: "0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:54:51 crc kubenswrapper[4895]: I0129 08:54:51.719756 4895 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709-util\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:51 crc kubenswrapper[4895]: I0129 08:54:51.923782 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" event={"ID":"0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709","Type":"ContainerDied","Data":"2c606f464d45fdaa1094041080ec047aa2df3d92938b23dbfa6e39ec3d6ac648"} Jan 29 08:54:51 crc kubenswrapper[4895]: I0129 08:54:51.923845 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c606f464d45fdaa1094041080ec047aa2df3d92938b23dbfa6e39ec3d6ac648" Jan 29 08:54:51 crc kubenswrapper[4895]: I0129 08:54:51.923978 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh" Jan 29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.862548 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl"] Jan 29 08:55:00 crc kubenswrapper[4895]: E0129 08:55:00.863652 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709" containerName="pull" Jan 29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.863672 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709" containerName="pull" Jan 29 08:55:00 crc kubenswrapper[4895]: E0129 08:55:00.863687 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709" containerName="util" Jan 29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.863696 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709" containerName="util" Jan 29 08:55:00 crc kubenswrapper[4895]: E0129 08:55:00.863713 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9f8a45-3fdc-4780-a008-e0f77c99dffc" containerName="console" Jan 29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.863721 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9f8a45-3fdc-4780-a008-e0f77c99dffc" containerName="console" Jan 29 08:55:00 crc kubenswrapper[4895]: E0129 08:55:00.863731 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709" containerName="extract" Jan 29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.863737 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709" containerName="extract" Jan 29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.863880 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709" 
containerName="extract" Jan 29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.863900 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea9f8a45-3fdc-4780-a008-e0f77c99dffc" containerName="console" Jan 29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.864500 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl" Jan 29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.867198 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.867298 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.867417 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.867814 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.868352 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-n5sdm" Jan 29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.884413 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl"] Jan 29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.984840 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d82c9dec-3917-4cb6-91f0-ee9b6ab253e7-apiservice-cert\") pod \"metallb-operator-controller-manager-6695fc676d-4fxsl\" (UID: \"d82c9dec-3917-4cb6-91f0-ee9b6ab253e7\") " pod="metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl" Jan 
29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.984970 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d82c9dec-3917-4cb6-91f0-ee9b6ab253e7-webhook-cert\") pod \"metallb-operator-controller-manager-6695fc676d-4fxsl\" (UID: \"d82c9dec-3917-4cb6-91f0-ee9b6ab253e7\") " pod="metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl" Jan 29 08:55:00 crc kubenswrapper[4895]: I0129 08:55:00.985023 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s2f8\" (UniqueName: \"kubernetes.io/projected/d82c9dec-3917-4cb6-91f0-ee9b6ab253e7-kube-api-access-9s2f8\") pod \"metallb-operator-controller-manager-6695fc676d-4fxsl\" (UID: \"d82c9dec-3917-4cb6-91f0-ee9b6ab253e7\") " pod="metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.086653 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d82c9dec-3917-4cb6-91f0-ee9b6ab253e7-webhook-cert\") pod \"metallb-operator-controller-manager-6695fc676d-4fxsl\" (UID: \"d82c9dec-3917-4cb6-91f0-ee9b6ab253e7\") " pod="metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.086728 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s2f8\" (UniqueName: \"kubernetes.io/projected/d82c9dec-3917-4cb6-91f0-ee9b6ab253e7-kube-api-access-9s2f8\") pod \"metallb-operator-controller-manager-6695fc676d-4fxsl\" (UID: \"d82c9dec-3917-4cb6-91f0-ee9b6ab253e7\") " pod="metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.086817 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/d82c9dec-3917-4cb6-91f0-ee9b6ab253e7-apiservice-cert\") pod \"metallb-operator-controller-manager-6695fc676d-4fxsl\" (UID: \"d82c9dec-3917-4cb6-91f0-ee9b6ab253e7\") " pod="metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.098060 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d82c9dec-3917-4cb6-91f0-ee9b6ab253e7-apiservice-cert\") pod \"metallb-operator-controller-manager-6695fc676d-4fxsl\" (UID: \"d82c9dec-3917-4cb6-91f0-ee9b6ab253e7\") " pod="metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.108518 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s2f8\" (UniqueName: \"kubernetes.io/projected/d82c9dec-3917-4cb6-91f0-ee9b6ab253e7-kube-api-access-9s2f8\") pod \"metallb-operator-controller-manager-6695fc676d-4fxsl\" (UID: \"d82c9dec-3917-4cb6-91f0-ee9b6ab253e7\") " pod="metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.108564 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d82c9dec-3917-4cb6-91f0-ee9b6ab253e7-webhook-cert\") pod \"metallb-operator-controller-manager-6695fc676d-4fxsl\" (UID: \"d82c9dec-3917-4cb6-91f0-ee9b6ab253e7\") " pod="metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.184196 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.315806 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz"] Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.317117 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.321137 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ts2sr" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.326123 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.328557 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.339292 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz"] Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.500962 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5af74c68-7b32-4db6-97b7-35cdcd2e9504-webhook-cert\") pod \"metallb-operator-webhook-server-659bffd789-lt6hz\" (UID: \"5af74c68-7b32-4db6-97b7-35cdcd2e9504\") " pod="metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.501033 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6b2z\" (UniqueName: \"kubernetes.io/projected/5af74c68-7b32-4db6-97b7-35cdcd2e9504-kube-api-access-g6b2z\") pod 
\"metallb-operator-webhook-server-659bffd789-lt6hz\" (UID: \"5af74c68-7b32-4db6-97b7-35cdcd2e9504\") " pod="metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.501062 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5af74c68-7b32-4db6-97b7-35cdcd2e9504-apiservice-cert\") pod \"metallb-operator-webhook-server-659bffd789-lt6hz\" (UID: \"5af74c68-7b32-4db6-97b7-35cdcd2e9504\") " pod="metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.530095 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl"] Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.602649 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5af74c68-7b32-4db6-97b7-35cdcd2e9504-webhook-cert\") pod \"metallb-operator-webhook-server-659bffd789-lt6hz\" (UID: \"5af74c68-7b32-4db6-97b7-35cdcd2e9504\") " pod="metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.602709 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6b2z\" (UniqueName: \"kubernetes.io/projected/5af74c68-7b32-4db6-97b7-35cdcd2e9504-kube-api-access-g6b2z\") pod \"metallb-operator-webhook-server-659bffd789-lt6hz\" (UID: \"5af74c68-7b32-4db6-97b7-35cdcd2e9504\") " pod="metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.602737 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5af74c68-7b32-4db6-97b7-35cdcd2e9504-apiservice-cert\") pod 
\"metallb-operator-webhook-server-659bffd789-lt6hz\" (UID: \"5af74c68-7b32-4db6-97b7-35cdcd2e9504\") " pod="metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.610021 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5af74c68-7b32-4db6-97b7-35cdcd2e9504-apiservice-cert\") pod \"metallb-operator-webhook-server-659bffd789-lt6hz\" (UID: \"5af74c68-7b32-4db6-97b7-35cdcd2e9504\") " pod="metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.610145 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5af74c68-7b32-4db6-97b7-35cdcd2e9504-webhook-cert\") pod \"metallb-operator-webhook-server-659bffd789-lt6hz\" (UID: \"5af74c68-7b32-4db6-97b7-35cdcd2e9504\") " pod="metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.620417 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6b2z\" (UniqueName: \"kubernetes.io/projected/5af74c68-7b32-4db6-97b7-35cdcd2e9504-kube-api-access-g6b2z\") pod \"metallb-operator-webhook-server-659bffd789-lt6hz\" (UID: \"5af74c68-7b32-4db6-97b7-35cdcd2e9504\") " pod="metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.634435 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz" Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.845394 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz"] Jan 29 08:55:01 crc kubenswrapper[4895]: W0129 08:55:01.851842 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5af74c68_7b32_4db6_97b7_35cdcd2e9504.slice/crio-18447b0ae271e2204b25bd9ecfd3a99b891e850fa40fcac7f8a93499fb39f551 WatchSource:0}: Error finding container 18447b0ae271e2204b25bd9ecfd3a99b891e850fa40fcac7f8a93499fb39f551: Status 404 returned error can't find the container with id 18447b0ae271e2204b25bd9ecfd3a99b891e850fa40fcac7f8a93499fb39f551 Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.990857 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl" event={"ID":"d82c9dec-3917-4cb6-91f0-ee9b6ab253e7","Type":"ContainerStarted","Data":"c69f58d87d4dcfecdb36c193f0f2d54612fdf683bc7f562be8532f7e87004894"} Jan 29 08:55:01 crc kubenswrapper[4895]: I0129 08:55:01.992178 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz" event={"ID":"5af74c68-7b32-4db6-97b7-35cdcd2e9504","Type":"ContainerStarted","Data":"18447b0ae271e2204b25bd9ecfd3a99b891e850fa40fcac7f8a93499fb39f551"} Jan 29 08:55:05 crc kubenswrapper[4895]: I0129 08:55:05.015523 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl" event={"ID":"d82c9dec-3917-4cb6-91f0-ee9b6ab253e7","Type":"ContainerStarted","Data":"4770f68af2e8693c4dc27337df1d81e0f6b270ccd22333f0755b2fd31a370fb5"} Jan 29 08:55:05 crc kubenswrapper[4895]: I0129 08:55:05.015993 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl" Jan 29 08:55:05 crc kubenswrapper[4895]: I0129 08:55:05.052814 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl" podStartSLOduration=1.8234757190000002 podStartE2EDuration="5.052784861s" podCreationTimestamp="2026-01-29 08:55:00 +0000 UTC" firstStartedPulling="2026-01-29 08:55:01.546185942 +0000 UTC m=+843.187694078" lastFinishedPulling="2026-01-29 08:55:04.775495074 +0000 UTC m=+846.417003220" observedRunningTime="2026-01-29 08:55:05.04337451 +0000 UTC m=+846.684882666" watchObservedRunningTime="2026-01-29 08:55:05.052784861 +0000 UTC m=+846.694293007" Jan 29 08:55:09 crc kubenswrapper[4895]: I0129 08:55:09.043734 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz" event={"ID":"5af74c68-7b32-4db6-97b7-35cdcd2e9504","Type":"ContainerStarted","Data":"aa8ee07d5e8288ec883ac56e66a50514d35bf4a9f1a421a929ff8e9db6aac8fb"} Jan 29 08:55:09 crc kubenswrapper[4895]: I0129 08:55:09.044181 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz" Jan 29 08:55:09 crc kubenswrapper[4895]: I0129 08:55:09.064714 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz" podStartSLOduration=1.9804058740000001 podStartE2EDuration="8.064691039s" podCreationTimestamp="2026-01-29 08:55:01 +0000 UTC" firstStartedPulling="2026-01-29 08:55:01.856308995 +0000 UTC m=+843.497817141" lastFinishedPulling="2026-01-29 08:55:07.94059416 +0000 UTC m=+849.582102306" observedRunningTime="2026-01-29 08:55:09.063161928 +0000 UTC m=+850.704670074" watchObservedRunningTime="2026-01-29 08:55:09.064691039 +0000 UTC m=+850.706199185" Jan 29 08:55:21 crc kubenswrapper[4895]: I0129 08:55:21.646126 4895 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-659bffd789-lt6hz" Jan 29 08:55:41 crc kubenswrapper[4895]: I0129 08:55:41.188437 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6695fc676d-4fxsl" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.022590 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fhh6k"] Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.025721 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.029021 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.029171 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.029384 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-qhg25" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.040816 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-jm6jg"] Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.041905 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jm6jg" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.044677 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.062860 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-jm6jg"] Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.139084 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-vpgqh"] Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.140576 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-vpgqh" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.144590 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-lvr5t" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.146272 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.146279 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.146424 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.174800 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s28wq\" (UniqueName: \"kubernetes.io/projected/62a11870-51ea-475c-82c9-e8db645c1284-kube-api-access-s28wq\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.174889 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/622e6489-4886-4658-b155-3c0d9cf63fbb-memberlist\") pod \"speaker-vpgqh\" (UID: \"622e6489-4886-4658-b155-3c0d9cf63fbb\") " pod="metallb-system/speaker-vpgqh" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.174947 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/62a11870-51ea-475c-82c9-e8db645c1284-reloader\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.174978 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/62a11870-51ea-475c-82c9-e8db645c1284-frr-sockets\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.175005 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klbwc\" (UniqueName: \"kubernetes.io/projected/622e6489-4886-4658-b155-3c0d9cf63fbb-kube-api-access-klbwc\") pod \"speaker-vpgqh\" (UID: \"622e6489-4886-4658-b155-3c0d9cf63fbb\") " pod="metallb-system/speaker-vpgqh" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.175057 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/62a11870-51ea-475c-82c9-e8db645c1284-frr-startup\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.175153 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/62a11870-51ea-475c-82c9-e8db645c1284-metrics\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.175211 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/62a11870-51ea-475c-82c9-e8db645c1284-frr-conf\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.175251 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpf6l\" (UniqueName: \"kubernetes.io/projected/5d4d4832-512a-4d5c-b6ea-8a90b2ad3297-kube-api-access-vpf6l\") pod \"frr-k8s-webhook-server-7df86c4f6c-jm6jg\" (UID: \"5d4d4832-512a-4d5c-b6ea-8a90b2ad3297\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jm6jg" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.175285 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/622e6489-4886-4658-b155-3c0d9cf63fbb-metallb-excludel2\") pod \"speaker-vpgqh\" (UID: \"622e6489-4886-4658-b155-3c0d9cf63fbb\") " pod="metallb-system/speaker-vpgqh" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.175375 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62a11870-51ea-475c-82c9-e8db645c1284-metrics-certs\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.175400 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/5d4d4832-512a-4d5c-b6ea-8a90b2ad3297-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-jm6jg\" (UID: \"5d4d4832-512a-4d5c-b6ea-8a90b2ad3297\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jm6jg" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.175426 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/622e6489-4886-4658-b155-3c0d9cf63fbb-metrics-certs\") pod \"speaker-vpgqh\" (UID: \"622e6489-4886-4658-b155-3c0d9cf63fbb\") " pod="metallb-system/speaker-vpgqh" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.175935 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-68xht"] Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.176989 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-68xht" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.179353 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.189310 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-68xht"] Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276031 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwr22\" (UniqueName: \"kubernetes.io/projected/3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f-kube-api-access-dwr22\") pod \"controller-6968d8fdc4-68xht\" (UID: \"3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f\") " pod="metallb-system/controller-6968d8fdc4-68xht" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276114 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/62a11870-51ea-475c-82c9-e8db645c1284-reloader\") pod \"frr-k8s-fhh6k\" (UID: 
\"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276135 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/622e6489-4886-4658-b155-3c0d9cf63fbb-memberlist\") pod \"speaker-vpgqh\" (UID: \"622e6489-4886-4658-b155-3c0d9cf63fbb\") " pod="metallb-system/speaker-vpgqh" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276161 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f-cert\") pod \"controller-6968d8fdc4-68xht\" (UID: \"3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f\") " pod="metallb-system/controller-6968d8fdc4-68xht" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276189 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/62a11870-51ea-475c-82c9-e8db645c1284-frr-sockets\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276220 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klbwc\" (UniqueName: \"kubernetes.io/projected/622e6489-4886-4658-b155-3c0d9cf63fbb-kube-api-access-klbwc\") pod \"speaker-vpgqh\" (UID: \"622e6489-4886-4658-b155-3c0d9cf63fbb\") " pod="metallb-system/speaker-vpgqh" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276243 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/62a11870-51ea-475c-82c9-e8db645c1284-frr-startup\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276288 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/62a11870-51ea-475c-82c9-e8db645c1284-metrics\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276307 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/62a11870-51ea-475c-82c9-e8db645c1284-frr-conf\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276329 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpf6l\" (UniqueName: \"kubernetes.io/projected/5d4d4832-512a-4d5c-b6ea-8a90b2ad3297-kube-api-access-vpf6l\") pod \"frr-k8s-webhook-server-7df86c4f6c-jm6jg\" (UID: \"5d4d4832-512a-4d5c-b6ea-8a90b2ad3297\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jm6jg" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276353 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/622e6489-4886-4658-b155-3c0d9cf63fbb-metallb-excludel2\") pod \"speaker-vpgqh\" (UID: \"622e6489-4886-4658-b155-3c0d9cf63fbb\") " pod="metallb-system/speaker-vpgqh" Jan 29 08:55:42 crc kubenswrapper[4895]: E0129 08:55:42.276357 4895 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 08:55:42 crc kubenswrapper[4895]: E0129 08:55:42.276459 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/622e6489-4886-4658-b155-3c0d9cf63fbb-memberlist podName:622e6489-4886-4658-b155-3c0d9cf63fbb nodeName:}" failed. No retries permitted until 2026-01-29 08:55:42.776435058 +0000 UTC m=+884.417943204 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/622e6489-4886-4658-b155-3c0d9cf63fbb-memberlist") pod "speaker-vpgqh" (UID: "622e6489-4886-4658-b155-3c0d9cf63fbb") : secret "metallb-memberlist" not found Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276384 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62a11870-51ea-475c-82c9-e8db645c1284-metrics-certs\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276753 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d4d4832-512a-4d5c-b6ea-8a90b2ad3297-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-jm6jg\" (UID: \"5d4d4832-512a-4d5c-b6ea-8a90b2ad3297\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jm6jg" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276794 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/622e6489-4886-4658-b155-3c0d9cf63fbb-metrics-certs\") pod \"speaker-vpgqh\" (UID: \"622e6489-4886-4658-b155-3c0d9cf63fbb\") " pod="metallb-system/speaker-vpgqh" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276828 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/62a11870-51ea-475c-82c9-e8db645c1284-frr-sockets\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276873 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s28wq\" (UniqueName: \"kubernetes.io/projected/62a11870-51ea-475c-82c9-e8db645c1284-kube-api-access-s28wq\") pod 
\"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.277058 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f-metrics-certs\") pod \"controller-6968d8fdc4-68xht\" (UID: \"3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f\") " pod="metallb-system/controller-6968d8fdc4-68xht" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.277238 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/62a11870-51ea-475c-82c9-e8db645c1284-frr-conf\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.276789 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/62a11870-51ea-475c-82c9-e8db645c1284-reloader\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.277512 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/62a11870-51ea-475c-82c9-e8db645c1284-metrics\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.278048 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/62a11870-51ea-475c-82c9-e8db645c1284-frr-startup\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.278125 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/622e6489-4886-4658-b155-3c0d9cf63fbb-metallb-excludel2\") pod \"speaker-vpgqh\" (UID: \"622e6489-4886-4658-b155-3c0d9cf63fbb\") " pod="metallb-system/speaker-vpgqh" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.286038 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/622e6489-4886-4658-b155-3c0d9cf63fbb-metrics-certs\") pod \"speaker-vpgqh\" (UID: \"622e6489-4886-4658-b155-3c0d9cf63fbb\") " pod="metallb-system/speaker-vpgqh" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.286139 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d4d4832-512a-4d5c-b6ea-8a90b2ad3297-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-jm6jg\" (UID: \"5d4d4832-512a-4d5c-b6ea-8a90b2ad3297\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jm6jg" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.293864 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62a11870-51ea-475c-82c9-e8db645c1284-metrics-certs\") pod \"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.299000 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klbwc\" (UniqueName: \"kubernetes.io/projected/622e6489-4886-4658-b155-3c0d9cf63fbb-kube-api-access-klbwc\") pod \"speaker-vpgqh\" (UID: \"622e6489-4886-4658-b155-3c0d9cf63fbb\") " pod="metallb-system/speaker-vpgqh" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.299132 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s28wq\" (UniqueName: \"kubernetes.io/projected/62a11870-51ea-475c-82c9-e8db645c1284-kube-api-access-s28wq\") pod 
\"frr-k8s-fhh6k\" (UID: \"62a11870-51ea-475c-82c9-e8db645c1284\") " pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.299737 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpf6l\" (UniqueName: \"kubernetes.io/projected/5d4d4832-512a-4d5c-b6ea-8a90b2ad3297-kube-api-access-vpf6l\") pod \"frr-k8s-webhook-server-7df86c4f6c-jm6jg\" (UID: \"5d4d4832-512a-4d5c-b6ea-8a90b2ad3297\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jm6jg" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.342891 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.354829 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jm6jg" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.379314 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f-metrics-certs\") pod \"controller-6968d8fdc4-68xht\" (UID: \"3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f\") " pod="metallb-system/controller-6968d8fdc4-68xht" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.379407 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwr22\" (UniqueName: \"kubernetes.io/projected/3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f-kube-api-access-dwr22\") pod \"controller-6968d8fdc4-68xht\" (UID: \"3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f\") " pod="metallb-system/controller-6968d8fdc4-68xht" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.379499 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f-cert\") pod \"controller-6968d8fdc4-68xht\" (UID: 
\"3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f\") " pod="metallb-system/controller-6968d8fdc4-68xht" Jan 29 08:55:42 crc kubenswrapper[4895]: E0129 08:55:42.379506 4895 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 29 08:55:42 crc kubenswrapper[4895]: E0129 08:55:42.379606 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f-metrics-certs podName:3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f nodeName:}" failed. No retries permitted until 2026-01-29 08:55:42.879576109 +0000 UTC m=+884.521084255 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f-metrics-certs") pod "controller-6968d8fdc4-68xht" (UID: "3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f") : secret "controller-certs-secret" not found Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.384901 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f-cert\") pod \"controller-6968d8fdc4-68xht\" (UID: \"3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f\") " pod="metallb-system/controller-6968d8fdc4-68xht" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.409503 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwr22\" (UniqueName: \"kubernetes.io/projected/3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f-kube-api-access-dwr22\") pod \"controller-6968d8fdc4-68xht\" (UID: \"3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f\") " pod="metallb-system/controller-6968d8fdc4-68xht" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.787183 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/622e6489-4886-4658-b155-3c0d9cf63fbb-memberlist\") pod \"speaker-vpgqh\" (UID: 
\"622e6489-4886-4658-b155-3c0d9cf63fbb\") " pod="metallb-system/speaker-vpgqh" Jan 29 08:55:42 crc kubenswrapper[4895]: E0129 08:55:42.787907 4895 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 08:55:42 crc kubenswrapper[4895]: E0129 08:55:42.787999 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/622e6489-4886-4658-b155-3c0d9cf63fbb-memberlist podName:622e6489-4886-4658-b155-3c0d9cf63fbb nodeName:}" failed. No retries permitted until 2026-01-29 08:55:43.787977524 +0000 UTC m=+885.429485670 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/622e6489-4886-4658-b155-3c0d9cf63fbb-memberlist") pod "speaker-vpgqh" (UID: "622e6489-4886-4658-b155-3c0d9cf63fbb") : secret "metallb-memberlist" not found Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.889456 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f-metrics-certs\") pod \"controller-6968d8fdc4-68xht\" (UID: \"3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f\") " pod="metallb-system/controller-6968d8fdc4-68xht" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.898429 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f-metrics-certs\") pod \"controller-6968d8fdc4-68xht\" (UID: \"3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f\") " pod="metallb-system/controller-6968d8fdc4-68xht" Jan 29 08:55:42 crc kubenswrapper[4895]: I0129 08:55:42.940159 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-jm6jg"] Jan 29 08:55:43 crc kubenswrapper[4895]: I0129 08:55:43.092690 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-68xht" Jan 29 08:55:43 crc kubenswrapper[4895]: I0129 08:55:43.303672 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jm6jg" event={"ID":"5d4d4832-512a-4d5c-b6ea-8a90b2ad3297","Type":"ContainerStarted","Data":"438e019018272a5b2d2229165c8e0f02e544b157126b37241a19ac7f377221fa"} Jan 29 08:55:43 crc kubenswrapper[4895]: I0129 08:55:43.305754 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhh6k" event={"ID":"62a11870-51ea-475c-82c9-e8db645c1284","Type":"ContainerStarted","Data":"d9d6122c2f088666d65eeaa87e2a8bc58cb00595049e115af9794261eaba4c80"} Jan 29 08:55:43 crc kubenswrapper[4895]: I0129 08:55:43.345072 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-68xht"] Jan 29 08:55:43 crc kubenswrapper[4895]: W0129 08:55:43.350362 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ea4f51f_f0bd_408c_8fbe_d38e86e52f2f.slice/crio-9f76ffe71b532d95ae21803d820314f7eab95a14fea48fdbd62c9356c168a0e0 WatchSource:0}: Error finding container 9f76ffe71b532d95ae21803d820314f7eab95a14fea48fdbd62c9356c168a0e0: Status 404 returned error can't find the container with id 9f76ffe71b532d95ae21803d820314f7eab95a14fea48fdbd62c9356c168a0e0 Jan 29 08:55:43 crc kubenswrapper[4895]: I0129 08:55:43.803128 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/622e6489-4886-4658-b155-3c0d9cf63fbb-memberlist\") pod \"speaker-vpgqh\" (UID: \"622e6489-4886-4658-b155-3c0d9cf63fbb\") " pod="metallb-system/speaker-vpgqh" Jan 29 08:55:43 crc kubenswrapper[4895]: I0129 08:55:43.809775 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/622e6489-4886-4658-b155-3c0d9cf63fbb-memberlist\") pod \"speaker-vpgqh\" (UID: \"622e6489-4886-4658-b155-3c0d9cf63fbb\") " pod="metallb-system/speaker-vpgqh" Jan 29 08:55:43 crc kubenswrapper[4895]: I0129 08:55:43.955559 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-vpgqh" Jan 29 08:55:43 crc kubenswrapper[4895]: W0129 08:55:43.979354 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod622e6489_4886_4658_b155_3c0d9cf63fbb.slice/crio-d4b6aab2ea6c8f11858d375ed44f6dcd476dc559a963ba4fd0ece1a1af6266e9 WatchSource:0}: Error finding container d4b6aab2ea6c8f11858d375ed44f6dcd476dc559a963ba4fd0ece1a1af6266e9: Status 404 returned error can't find the container with id d4b6aab2ea6c8f11858d375ed44f6dcd476dc559a963ba4fd0ece1a1af6266e9 Jan 29 08:55:44 crc kubenswrapper[4895]: I0129 08:55:44.318168 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vpgqh" event={"ID":"622e6489-4886-4658-b155-3c0d9cf63fbb","Type":"ContainerStarted","Data":"ba97b7822234a5aa2287983330f4f323125397f176c707301d6123f99d7b043c"} Jan 29 08:55:44 crc kubenswrapper[4895]: I0129 08:55:44.318244 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vpgqh" event={"ID":"622e6489-4886-4658-b155-3c0d9cf63fbb","Type":"ContainerStarted","Data":"d4b6aab2ea6c8f11858d375ed44f6dcd476dc559a963ba4fd0ece1a1af6266e9"} Jan 29 08:55:44 crc kubenswrapper[4895]: I0129 08:55:44.320379 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-68xht" event={"ID":"3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f","Type":"ContainerStarted","Data":"5d571282d19c9289a809485284008fa11f729ed3ccdec9ef8db08febd7fe75de"} Jan 29 08:55:44 crc kubenswrapper[4895]: I0129 08:55:44.320402 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-68xht" 
event={"ID":"3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f","Type":"ContainerStarted","Data":"bdc6ea319543b8bc3a924194468493eb8b8e545a4eff27645430a40b9a2aeb13"} Jan 29 08:55:44 crc kubenswrapper[4895]: I0129 08:55:44.320412 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-68xht" event={"ID":"3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f","Type":"ContainerStarted","Data":"9f76ffe71b532d95ae21803d820314f7eab95a14fea48fdbd62c9356c168a0e0"} Jan 29 08:55:44 crc kubenswrapper[4895]: I0129 08:55:44.322097 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-68xht" Jan 29 08:55:44 crc kubenswrapper[4895]: I0129 08:55:44.341206 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-68xht" podStartSLOduration=2.34118418 podStartE2EDuration="2.34118418s" podCreationTimestamp="2026-01-29 08:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:55:44.33779458 +0000 UTC m=+885.979302746" watchObservedRunningTime="2026-01-29 08:55:44.34118418 +0000 UTC m=+885.982692326" Jan 29 08:55:45 crc kubenswrapper[4895]: I0129 08:55:45.338622 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vpgqh" event={"ID":"622e6489-4886-4658-b155-3c0d9cf63fbb","Type":"ContainerStarted","Data":"d8b29eac2ef96ac36f5cc9296e529dbf07bf1a2453fb6836c5ca479663a5f583"} Jan 29 08:55:45 crc kubenswrapper[4895]: I0129 08:55:45.339118 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-vpgqh" Jan 29 08:55:45 crc kubenswrapper[4895]: I0129 08:55:45.366416 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-vpgqh" podStartSLOduration=3.366394859 podStartE2EDuration="3.366394859s" podCreationTimestamp="2026-01-29 08:55:42 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:55:45.361706975 +0000 UTC m=+887.003215121" watchObservedRunningTime="2026-01-29 08:55:45.366394859 +0000 UTC m=+887.007903005" Jan 29 08:55:53 crc kubenswrapper[4895]: I0129 08:55:53.102394 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-68xht" Jan 29 08:55:54 crc kubenswrapper[4895]: I0129 08:55:54.567993 4895 generic.go:334] "Generic (PLEG): container finished" podID="62a11870-51ea-475c-82c9-e8db645c1284" containerID="073b60515ab0717f778de98c343fec373c07cc094a9094858ddd07271377cb72" exitCode=0 Jan 29 08:55:54 crc kubenswrapper[4895]: I0129 08:55:54.568331 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhh6k" event={"ID":"62a11870-51ea-475c-82c9-e8db645c1284","Type":"ContainerDied","Data":"073b60515ab0717f778de98c343fec373c07cc094a9094858ddd07271377cb72"} Jan 29 08:55:54 crc kubenswrapper[4895]: I0129 08:55:54.571670 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jm6jg" event={"ID":"5d4d4832-512a-4d5c-b6ea-8a90b2ad3297","Type":"ContainerStarted","Data":"f9685a3f265b024afb5d3547115b0c8f5a0a06c1a8161baa876b1ceba6ce2c22"} Jan 29 08:55:54 crc kubenswrapper[4895]: I0129 08:55:54.572024 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jm6jg" Jan 29 08:55:54 crc kubenswrapper[4895]: I0129 08:55:54.614750 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jm6jg" podStartSLOduration=2.104141096 podStartE2EDuration="12.614716873s" podCreationTimestamp="2026-01-29 08:55:42 +0000 UTC" firstStartedPulling="2026-01-29 08:55:42.951847016 +0000 UTC m=+884.593355162" lastFinishedPulling="2026-01-29 08:55:53.462422793 +0000 UTC 
m=+895.103930939" observedRunningTime="2026-01-29 08:55:54.61276369 +0000 UTC m=+896.254271836" watchObservedRunningTime="2026-01-29 08:55:54.614716873 +0000 UTC m=+896.256225029" Jan 29 08:55:55 crc kubenswrapper[4895]: I0129 08:55:55.581503 4895 generic.go:334] "Generic (PLEG): container finished" podID="62a11870-51ea-475c-82c9-e8db645c1284" containerID="c55d0ed913913c850421aa39447d5bfc8aff569d330780c8ee35e833bc87b7fe" exitCode=0 Jan 29 08:55:55 crc kubenswrapper[4895]: I0129 08:55:55.581636 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhh6k" event={"ID":"62a11870-51ea-475c-82c9-e8db645c1284","Type":"ContainerDied","Data":"c55d0ed913913c850421aa39447d5bfc8aff569d330780c8ee35e833bc87b7fe"} Jan 29 08:55:56 crc kubenswrapper[4895]: I0129 08:55:56.590361 4895 generic.go:334] "Generic (PLEG): container finished" podID="62a11870-51ea-475c-82c9-e8db645c1284" containerID="a640d139b14c36760c36592b812c0ee38983a6537519b283413a2dd55d063685" exitCode=0 Jan 29 08:55:56 crc kubenswrapper[4895]: I0129 08:55:56.590424 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhh6k" event={"ID":"62a11870-51ea-475c-82c9-e8db645c1284","Type":"ContainerDied","Data":"a640d139b14c36760c36592b812c0ee38983a6537519b283413a2dd55d063685"} Jan 29 08:55:57 crc kubenswrapper[4895]: I0129 08:55:57.599800 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhh6k" event={"ID":"62a11870-51ea-475c-82c9-e8db645c1284","Type":"ContainerStarted","Data":"8fa90d762c382c6208f809df6d07949dcb1441c04c3e74e4751e13e54c4e614b"} Jan 29 08:55:57 crc kubenswrapper[4895]: I0129 08:55:57.600285 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhh6k" event={"ID":"62a11870-51ea-475c-82c9-e8db645c1284","Type":"ContainerStarted","Data":"415f15df8ad7535799cf5cd51a51b0c90ba36701f372f2d321a1245f05bf3b4f"} Jan 29 08:55:57 crc kubenswrapper[4895]: I0129 08:55:57.600301 4895 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhh6k" event={"ID":"62a11870-51ea-475c-82c9-e8db645c1284","Type":"ContainerStarted","Data":"c0760e0d863cb3087c8ec77d6af7e8167ed61b7ec78f05a35956d6e6afb58aea"} Jan 29 08:55:57 crc kubenswrapper[4895]: I0129 08:55:57.600312 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhh6k" event={"ID":"62a11870-51ea-475c-82c9-e8db645c1284","Type":"ContainerStarted","Data":"933242aa8407100ff1b319fe162a508e8da68207fca4a9bb1cced22a4a62d574"} Jan 29 08:55:57 crc kubenswrapper[4895]: I0129 08:55:57.600323 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhh6k" event={"ID":"62a11870-51ea-475c-82c9-e8db645c1284","Type":"ContainerStarted","Data":"e3e7e637fdd38ffd4e944106a2babd896114a8a52784bea4bacd549e139b8f04"} Jan 29 08:55:58 crc kubenswrapper[4895]: I0129 08:55:58.611357 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fhh6k" event={"ID":"62a11870-51ea-475c-82c9-e8db645c1284","Type":"ContainerStarted","Data":"8e695782095974d3bc824773601b4634cdc24b4f2d1d5e8f4943efb5ca8449ad"} Jan 29 08:55:58 crc kubenswrapper[4895]: I0129 08:55:58.611577 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:55:58 crc kubenswrapper[4895]: I0129 08:55:58.643620 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fhh6k" podStartSLOduration=6.179883047 podStartE2EDuration="16.643598223s" podCreationTimestamp="2026-01-29 08:55:42 +0000 UTC" firstStartedPulling="2026-01-29 08:55:42.980783038 +0000 UTC m=+884.622291184" lastFinishedPulling="2026-01-29 08:55:53.444498214 +0000 UTC m=+895.086006360" observedRunningTime="2026-01-29 08:55:58.639736779 +0000 UTC m=+900.281244955" watchObservedRunningTime="2026-01-29 08:55:58.643598223 +0000 UTC m=+900.285106369" Jan 29 08:56:02 crc kubenswrapper[4895]: I0129 08:56:02.343651 4895 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:56:02 crc kubenswrapper[4895]: I0129 08:56:02.383727 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:56:03 crc kubenswrapper[4895]: I0129 08:56:03.968157 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-vpgqh" Jan 29 08:56:07 crc kubenswrapper[4895]: I0129 08:56:07.110780 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-289ll"] Jan 29 08:56:07 crc kubenswrapper[4895]: I0129 08:56:07.111683 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-289ll" Jan 29 08:56:07 crc kubenswrapper[4895]: I0129 08:56:07.114071 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8khph" Jan 29 08:56:07 crc kubenswrapper[4895]: I0129 08:56:07.114667 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 08:56:07 crc kubenswrapper[4895]: I0129 08:56:07.115497 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 08:56:07 crc kubenswrapper[4895]: I0129 08:56:07.146815 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-289ll"] Jan 29 08:56:07 crc kubenswrapper[4895]: I0129 08:56:07.213990 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79kcl\" (UniqueName: \"kubernetes.io/projected/263dbd91-db9c-477a-a06c-d9c084bd3693-kube-api-access-79kcl\") pod \"openstack-operator-index-289ll\" (UID: \"263dbd91-db9c-477a-a06c-d9c084bd3693\") " pod="openstack-operators/openstack-operator-index-289ll" Jan 29 08:56:07 crc 
kubenswrapper[4895]: I0129 08:56:07.316634 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79kcl\" (UniqueName: \"kubernetes.io/projected/263dbd91-db9c-477a-a06c-d9c084bd3693-kube-api-access-79kcl\") pod \"openstack-operator-index-289ll\" (UID: \"263dbd91-db9c-477a-a06c-d9c084bd3693\") " pod="openstack-operators/openstack-operator-index-289ll" Jan 29 08:56:07 crc kubenswrapper[4895]: I0129 08:56:07.338531 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79kcl\" (UniqueName: \"kubernetes.io/projected/263dbd91-db9c-477a-a06c-d9c084bd3693-kube-api-access-79kcl\") pod \"openstack-operator-index-289ll\" (UID: \"263dbd91-db9c-477a-a06c-d9c084bd3693\") " pod="openstack-operators/openstack-operator-index-289ll" Jan 29 08:56:07 crc kubenswrapper[4895]: I0129 08:56:07.437288 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-289ll" Jan 29 08:56:07 crc kubenswrapper[4895]: I0129 08:56:07.879280 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-289ll"] Jan 29 08:56:07 crc kubenswrapper[4895]: W0129 08:56:07.881260 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod263dbd91_db9c_477a_a06c_d9c084bd3693.slice/crio-b37bb8af0492f29a2dda7045fbbb31fc2f03283e2eacc248acdac171ec96c5a3 WatchSource:0}: Error finding container b37bb8af0492f29a2dda7045fbbb31fc2f03283e2eacc248acdac171ec96c5a3: Status 404 returned error can't find the container with id b37bb8af0492f29a2dda7045fbbb31fc2f03283e2eacc248acdac171ec96c5a3 Jan 29 08:56:08 crc kubenswrapper[4895]: I0129 08:56:08.678824 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-289ll" 
event={"ID":"263dbd91-db9c-477a-a06c-d9c084bd3693","Type":"ContainerStarted","Data":"b37bb8af0492f29a2dda7045fbbb31fc2f03283e2eacc248acdac171ec96c5a3"} Jan 29 08:56:10 crc kubenswrapper[4895]: I0129 08:56:10.284505 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-289ll"] Jan 29 08:56:10 crc kubenswrapper[4895]: I0129 08:56:10.889632 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-56sqg"] Jan 29 08:56:10 crc kubenswrapper[4895]: I0129 08:56:10.890756 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-56sqg" Jan 29 08:56:10 crc kubenswrapper[4895]: I0129 08:56:10.902907 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-56sqg"] Jan 29 08:56:11 crc kubenswrapper[4895]: I0129 08:56:11.075747 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkd8z\" (UniqueName: \"kubernetes.io/projected/a833ad23-634a-4270-a6aa-267480e7bb2a-kube-api-access-vkd8z\") pod \"openstack-operator-index-56sqg\" (UID: \"a833ad23-634a-4270-a6aa-267480e7bb2a\") " pod="openstack-operators/openstack-operator-index-56sqg" Jan 29 08:56:11 crc kubenswrapper[4895]: I0129 08:56:11.177404 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkd8z\" (UniqueName: \"kubernetes.io/projected/a833ad23-634a-4270-a6aa-267480e7bb2a-kube-api-access-vkd8z\") pod \"openstack-operator-index-56sqg\" (UID: \"a833ad23-634a-4270-a6aa-267480e7bb2a\") " pod="openstack-operators/openstack-operator-index-56sqg" Jan 29 08:56:11 crc kubenswrapper[4895]: I0129 08:56:11.202724 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkd8z\" (UniqueName: \"kubernetes.io/projected/a833ad23-634a-4270-a6aa-267480e7bb2a-kube-api-access-vkd8z\") pod 
\"openstack-operator-index-56sqg\" (UID: \"a833ad23-634a-4270-a6aa-267480e7bb2a\") " pod="openstack-operators/openstack-operator-index-56sqg" Jan 29 08:56:11 crc kubenswrapper[4895]: I0129 08:56:11.219192 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-56sqg" Jan 29 08:56:12 crc kubenswrapper[4895]: I0129 08:56:12.352007 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fhh6k" Jan 29 08:56:12 crc kubenswrapper[4895]: I0129 08:56:12.371935 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jm6jg" Jan 29 08:56:12 crc kubenswrapper[4895]: I0129 08:56:12.689433 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-56sqg"] Jan 29 08:56:12 crc kubenswrapper[4895]: W0129 08:56:12.693726 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda833ad23_634a_4270_a6aa_267480e7bb2a.slice/crio-486c6d82110de3b13e8f85cebc2f2b0fab79538220611c146d38d49adfebf24a WatchSource:0}: Error finding container 486c6d82110de3b13e8f85cebc2f2b0fab79538220611c146d38d49adfebf24a: Status 404 returned error can't find the container with id 486c6d82110de3b13e8f85cebc2f2b0fab79538220611c146d38d49adfebf24a Jan 29 08:56:12 crc kubenswrapper[4895]: I0129 08:56:12.710688 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-289ll" event={"ID":"263dbd91-db9c-477a-a06c-d9c084bd3693","Type":"ContainerStarted","Data":"22fda80a08c4c8d2371ac214e865508d651b2672fa8dfd86e17f756072db6bde"} Jan 29 08:56:12 crc kubenswrapper[4895]: I0129 08:56:12.710828 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-289ll" podUID="263dbd91-db9c-477a-a06c-d9c084bd3693" 
containerName="registry-server" containerID="cri-o://22fda80a08c4c8d2371ac214e865508d651b2672fa8dfd86e17f756072db6bde" gracePeriod=2 Jan 29 08:56:12 crc kubenswrapper[4895]: I0129 08:56:12.712545 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-56sqg" event={"ID":"a833ad23-634a-4270-a6aa-267480e7bb2a","Type":"ContainerStarted","Data":"486c6d82110de3b13e8f85cebc2f2b0fab79538220611c146d38d49adfebf24a"} Jan 29 08:56:12 crc kubenswrapper[4895]: I0129 08:56:12.736171 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-289ll" podStartSLOduration=1.324759261 podStartE2EDuration="5.736145316s" podCreationTimestamp="2026-01-29 08:56:07 +0000 UTC" firstStartedPulling="2026-01-29 08:56:07.884507417 +0000 UTC m=+909.526015563" lastFinishedPulling="2026-01-29 08:56:12.295893472 +0000 UTC m=+913.937401618" observedRunningTime="2026-01-29 08:56:12.730551607 +0000 UTC m=+914.372059793" watchObservedRunningTime="2026-01-29 08:56:12.736145316 +0000 UTC m=+914.377653462" Jan 29 08:56:13 crc kubenswrapper[4895]: I0129 08:56:13.069901 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-289ll" Jan 29 08:56:13 crc kubenswrapper[4895]: I0129 08:56:13.212698 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79kcl\" (UniqueName: \"kubernetes.io/projected/263dbd91-db9c-477a-a06c-d9c084bd3693-kube-api-access-79kcl\") pod \"263dbd91-db9c-477a-a06c-d9c084bd3693\" (UID: \"263dbd91-db9c-477a-a06c-d9c084bd3693\") " Jan 29 08:56:13 crc kubenswrapper[4895]: I0129 08:56:13.221172 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/263dbd91-db9c-477a-a06c-d9c084bd3693-kube-api-access-79kcl" (OuterVolumeSpecName: "kube-api-access-79kcl") pod "263dbd91-db9c-477a-a06c-d9c084bd3693" (UID: "263dbd91-db9c-477a-a06c-d9c084bd3693"). InnerVolumeSpecName "kube-api-access-79kcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:56:13 crc kubenswrapper[4895]: I0129 08:56:13.316407 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79kcl\" (UniqueName: \"kubernetes.io/projected/263dbd91-db9c-477a-a06c-d9c084bd3693-kube-api-access-79kcl\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:13 crc kubenswrapper[4895]: I0129 08:56:13.721087 4895 generic.go:334] "Generic (PLEG): container finished" podID="263dbd91-db9c-477a-a06c-d9c084bd3693" containerID="22fda80a08c4c8d2371ac214e865508d651b2672fa8dfd86e17f756072db6bde" exitCode=0 Jan 29 08:56:13 crc kubenswrapper[4895]: I0129 08:56:13.721134 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-289ll" Jan 29 08:56:13 crc kubenswrapper[4895]: I0129 08:56:13.721149 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-289ll" event={"ID":"263dbd91-db9c-477a-a06c-d9c084bd3693","Type":"ContainerDied","Data":"22fda80a08c4c8d2371ac214e865508d651b2672fa8dfd86e17f756072db6bde"} Jan 29 08:56:13 crc kubenswrapper[4895]: I0129 08:56:13.721370 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-289ll" event={"ID":"263dbd91-db9c-477a-a06c-d9c084bd3693","Type":"ContainerDied","Data":"b37bb8af0492f29a2dda7045fbbb31fc2f03283e2eacc248acdac171ec96c5a3"} Jan 29 08:56:13 crc kubenswrapper[4895]: I0129 08:56:13.721394 4895 scope.go:117] "RemoveContainer" containerID="22fda80a08c4c8d2371ac214e865508d651b2672fa8dfd86e17f756072db6bde" Jan 29 08:56:13 crc kubenswrapper[4895]: I0129 08:56:13.723226 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-56sqg" event={"ID":"a833ad23-634a-4270-a6aa-267480e7bb2a","Type":"ContainerStarted","Data":"5a498063e0d0ae6c791616dbde3eb8d171f8a8482d4b555aa85b38ff26ef7b97"} Jan 29 08:56:13 crc kubenswrapper[4895]: I0129 08:56:13.740445 4895 scope.go:117] "RemoveContainer" containerID="22fda80a08c4c8d2371ac214e865508d651b2672fa8dfd86e17f756072db6bde" Jan 29 08:56:13 crc kubenswrapper[4895]: E0129 08:56:13.741142 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22fda80a08c4c8d2371ac214e865508d651b2672fa8dfd86e17f756072db6bde\": container with ID starting with 22fda80a08c4c8d2371ac214e865508d651b2672fa8dfd86e17f756072db6bde not found: ID does not exist" containerID="22fda80a08c4c8d2371ac214e865508d651b2672fa8dfd86e17f756072db6bde" Jan 29 08:56:13 crc kubenswrapper[4895]: I0129 08:56:13.741188 4895 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"22fda80a08c4c8d2371ac214e865508d651b2672fa8dfd86e17f756072db6bde"} err="failed to get container status \"22fda80a08c4c8d2371ac214e865508d651b2672fa8dfd86e17f756072db6bde\": rpc error: code = NotFound desc = could not find container \"22fda80a08c4c8d2371ac214e865508d651b2672fa8dfd86e17f756072db6bde\": container with ID starting with 22fda80a08c4c8d2371ac214e865508d651b2672fa8dfd86e17f756072db6bde not found: ID does not exist" Jan 29 08:56:13 crc kubenswrapper[4895]: I0129 08:56:13.745757 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-56sqg" podStartSLOduration=3.688458621 podStartE2EDuration="3.74573623s" podCreationTimestamp="2026-01-29 08:56:10 +0000 UTC" firstStartedPulling="2026-01-29 08:56:12.698862882 +0000 UTC m=+914.340371028" lastFinishedPulling="2026-01-29 08:56:12.756140491 +0000 UTC m=+914.397648637" observedRunningTime="2026-01-29 08:56:13.745137614 +0000 UTC m=+915.386645770" watchObservedRunningTime="2026-01-29 08:56:13.74573623 +0000 UTC m=+915.387244376" Jan 29 08:56:13 crc kubenswrapper[4895]: I0129 08:56:13.764646 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-289ll"] Jan 29 08:56:13 crc kubenswrapper[4895]: I0129 08:56:13.770337 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-289ll"] Jan 29 08:56:15 crc kubenswrapper[4895]: I0129 08:56:15.223349 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="263dbd91-db9c-477a-a06c-d9c084bd3693" path="/var/lib/kubelet/pods/263dbd91-db9c-477a-a06c-d9c084bd3693/volumes" Jan 29 08:56:16 crc kubenswrapper[4895]: I0129 08:56:16.020986 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:56:16 crc kubenswrapper[4895]: I0129 08:56:16.021098 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:56:17 crc kubenswrapper[4895]: I0129 08:56:17.294203 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vcdnq"] Jan 29 08:56:17 crc kubenswrapper[4895]: E0129 08:56:17.294535 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="263dbd91-db9c-477a-a06c-d9c084bd3693" containerName="registry-server" Jan 29 08:56:17 crc kubenswrapper[4895]: I0129 08:56:17.294551 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="263dbd91-db9c-477a-a06c-d9c084bd3693" containerName="registry-server" Jan 29 08:56:17 crc kubenswrapper[4895]: I0129 08:56:17.294716 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="263dbd91-db9c-477a-a06c-d9c084bd3693" containerName="registry-server" Jan 29 08:56:17 crc kubenswrapper[4895]: I0129 08:56:17.295777 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:17 crc kubenswrapper[4895]: I0129 08:56:17.305105 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcdnq"] Jan 29 08:56:17 crc kubenswrapper[4895]: I0129 08:56:17.480810 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-utilities\") pod \"certified-operators-vcdnq\" (UID: \"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11\") " pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:17 crc kubenswrapper[4895]: I0129 08:56:17.480884 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwm5m\" (UniqueName: \"kubernetes.io/projected/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-kube-api-access-jwm5m\") pod \"certified-operators-vcdnq\" (UID: \"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11\") " pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:17 crc kubenswrapper[4895]: I0129 08:56:17.481033 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-catalog-content\") pod \"certified-operators-vcdnq\" (UID: \"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11\") " pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:17 crc kubenswrapper[4895]: I0129 08:56:17.582949 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-utilities\") pod \"certified-operators-vcdnq\" (UID: \"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11\") " pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:17 crc kubenswrapper[4895]: I0129 08:56:17.583243 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jwm5m\" (UniqueName: \"kubernetes.io/projected/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-kube-api-access-jwm5m\") pod \"certified-operators-vcdnq\" (UID: \"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11\") " pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:17 crc kubenswrapper[4895]: I0129 08:56:17.583359 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-catalog-content\") pod \"certified-operators-vcdnq\" (UID: \"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11\") " pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:17 crc kubenswrapper[4895]: I0129 08:56:17.583696 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-utilities\") pod \"certified-operators-vcdnq\" (UID: \"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11\") " pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:17 crc kubenswrapper[4895]: I0129 08:56:17.583889 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-catalog-content\") pod \"certified-operators-vcdnq\" (UID: \"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11\") " pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:17 crc kubenswrapper[4895]: I0129 08:56:17.610827 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwm5m\" (UniqueName: \"kubernetes.io/projected/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-kube-api-access-jwm5m\") pod \"certified-operators-vcdnq\" (UID: \"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11\") " pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:17 crc kubenswrapper[4895]: I0129 08:56:17.628520 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:18 crc kubenswrapper[4895]: I0129 08:56:18.113597 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcdnq"] Jan 29 08:56:18 crc kubenswrapper[4895]: I0129 08:56:18.786024 4895 generic.go:334] "Generic (PLEG): container finished" podID="f1dcf7d1-0b00-4458-bebe-3c75e15e6f11" containerID="131dcd4e2bc22fcb8231e2cbb1520c4e2ef4239cd9f84dc92d32faf9f10c04e2" exitCode=0 Jan 29 08:56:18 crc kubenswrapper[4895]: I0129 08:56:18.786123 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcdnq" event={"ID":"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11","Type":"ContainerDied","Data":"131dcd4e2bc22fcb8231e2cbb1520c4e2ef4239cd9f84dc92d32faf9f10c04e2"} Jan 29 08:56:18 crc kubenswrapper[4895]: I0129 08:56:18.786204 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcdnq" event={"ID":"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11","Type":"ContainerStarted","Data":"0d797b1fcf6ba1764152805b92e643b5c35d7f67b894b625417b7e23959efce3"} Jan 29 08:56:19 crc kubenswrapper[4895]: I0129 08:56:19.893926 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcdnq" event={"ID":"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11","Type":"ContainerStarted","Data":"c90cc555fa8032aad574f3a1c3f903d5a9de47302ece296c117540f42b693bdc"} Jan 29 08:56:21 crc kubenswrapper[4895]: I0129 08:56:21.221071 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-56sqg" Jan 29 08:56:21 crc kubenswrapper[4895]: I0129 08:56:21.221507 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-56sqg" Jan 29 08:56:21 crc kubenswrapper[4895]: I0129 08:56:21.278546 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/openstack-operator-index-56sqg" Jan 29 08:56:22 crc kubenswrapper[4895]: I0129 08:56:22.073428 4895 generic.go:334] "Generic (PLEG): container finished" podID="f1dcf7d1-0b00-4458-bebe-3c75e15e6f11" containerID="c90cc555fa8032aad574f3a1c3f903d5a9de47302ece296c117540f42b693bdc" exitCode=0 Jan 29 08:56:22 crc kubenswrapper[4895]: I0129 08:56:22.073540 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcdnq" event={"ID":"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11","Type":"ContainerDied","Data":"c90cc555fa8032aad574f3a1c3f903d5a9de47302ece296c117540f42b693bdc"} Jan 29 08:56:22 crc kubenswrapper[4895]: I0129 08:56:22.113040 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-56sqg" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.086115 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcdnq" event={"ID":"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11","Type":"ContainerStarted","Data":"5290b70e342d03f48cc4d6cc50452815591ef56cdb9f4eadd5fcd74dfb8c7b8f"} Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.106008 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vcdnq" podStartSLOduration=2.35344802 podStartE2EDuration="6.105983738s" podCreationTimestamp="2026-01-29 08:56:17 +0000 UTC" firstStartedPulling="2026-01-29 08:56:18.788152159 +0000 UTC m=+920.429660305" lastFinishedPulling="2026-01-29 08:56:22.540687877 +0000 UTC m=+924.182196023" observedRunningTime="2026-01-29 08:56:23.103523022 +0000 UTC m=+924.745031168" watchObservedRunningTime="2026-01-29 08:56:23.105983738 +0000 UTC m=+924.747491884" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.531710 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8"] Jan 29 
08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.533204 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.535768 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-mhq2t" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.543142 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8"] Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.675009 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1518b1d-569a-475c-ac03-5ccf624c3a36-util\") pod \"c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8\" (UID: \"f1518b1d-569a-475c-ac03-5ccf624c3a36\") " pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.675080 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1518b1d-569a-475c-ac03-5ccf624c3a36-bundle\") pod \"c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8\" (UID: \"f1518b1d-569a-475c-ac03-5ccf624c3a36\") " pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.675290 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd6kb\" (UniqueName: \"kubernetes.io/projected/f1518b1d-569a-475c-ac03-5ccf624c3a36-kube-api-access-zd6kb\") pod \"c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8\" (UID: \"f1518b1d-569a-475c-ac03-5ccf624c3a36\") " 
pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.693260 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-79z88"] Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.720329 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.735488 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-79z88"] Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.777351 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1518b1d-569a-475c-ac03-5ccf624c3a36-bundle\") pod \"c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8\" (UID: \"f1518b1d-569a-475c-ac03-5ccf624c3a36\") " pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.777470 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd6kb\" (UniqueName: \"kubernetes.io/projected/f1518b1d-569a-475c-ac03-5ccf624c3a36-kube-api-access-zd6kb\") pod \"c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8\" (UID: \"f1518b1d-569a-475c-ac03-5ccf624c3a36\") " pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.777551 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1518b1d-569a-475c-ac03-5ccf624c3a36-util\") pod \"c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8\" (UID: \"f1518b1d-569a-475c-ac03-5ccf624c3a36\") " 
pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.778104 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1518b1d-569a-475c-ac03-5ccf624c3a36-bundle\") pod \"c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8\" (UID: \"f1518b1d-569a-475c-ac03-5ccf624c3a36\") " pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.778229 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1518b1d-569a-475c-ac03-5ccf624c3a36-util\") pod \"c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8\" (UID: \"f1518b1d-569a-475c-ac03-5ccf624c3a36\") " pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.802572 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd6kb\" (UniqueName: \"kubernetes.io/projected/f1518b1d-569a-475c-ac03-5ccf624c3a36-kube-api-access-zd6kb\") pod \"c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8\" (UID: \"f1518b1d-569a-475c-ac03-5ccf624c3a36\") " pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.849151 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.879238 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/219a2a8d-92c5-4d50-8039-7a9af898cf2d-utilities\") pod \"community-operators-79z88\" (UID: \"219a2a8d-92c5-4d50-8039-7a9af898cf2d\") " pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.879305 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/219a2a8d-92c5-4d50-8039-7a9af898cf2d-catalog-content\") pod \"community-operators-79z88\" (UID: \"219a2a8d-92c5-4d50-8039-7a9af898cf2d\") " pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.879360 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkhff\" (UniqueName: \"kubernetes.io/projected/219a2a8d-92c5-4d50-8039-7a9af898cf2d-kube-api-access-kkhff\") pod \"community-operators-79z88\" (UID: \"219a2a8d-92c5-4d50-8039-7a9af898cf2d\") " pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.983291 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/219a2a8d-92c5-4d50-8039-7a9af898cf2d-utilities\") pod \"community-operators-79z88\" (UID: \"219a2a8d-92c5-4d50-8039-7a9af898cf2d\") " pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.983835 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/219a2a8d-92c5-4d50-8039-7a9af898cf2d-catalog-content\") pod \"community-operators-79z88\" (UID: \"219a2a8d-92c5-4d50-8039-7a9af898cf2d\") " pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.983906 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkhff\" (UniqueName: \"kubernetes.io/projected/219a2a8d-92c5-4d50-8039-7a9af898cf2d-kube-api-access-kkhff\") pod \"community-operators-79z88\" (UID: \"219a2a8d-92c5-4d50-8039-7a9af898cf2d\") " pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.984094 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/219a2a8d-92c5-4d50-8039-7a9af898cf2d-utilities\") pod \"community-operators-79z88\" (UID: \"219a2a8d-92c5-4d50-8039-7a9af898cf2d\") " pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:23 crc kubenswrapper[4895]: I0129 08:56:23.984151 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/219a2a8d-92c5-4d50-8039-7a9af898cf2d-catalog-content\") pod \"community-operators-79z88\" (UID: \"219a2a8d-92c5-4d50-8039-7a9af898cf2d\") " pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:24 crc kubenswrapper[4895]: I0129 08:56:24.002971 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkhff\" (UniqueName: \"kubernetes.io/projected/219a2a8d-92c5-4d50-8039-7a9af898cf2d-kube-api-access-kkhff\") pod \"community-operators-79z88\" (UID: \"219a2a8d-92c5-4d50-8039-7a9af898cf2d\") " pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:24 crc kubenswrapper[4895]: I0129 08:56:24.047600 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:24 crc kubenswrapper[4895]: I0129 08:56:24.838062 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-79z88"] Jan 29 08:56:24 crc kubenswrapper[4895]: I0129 08:56:24.967074 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8"] Jan 29 08:56:25 crc kubenswrapper[4895]: I0129 08:56:25.116110 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" event={"ID":"f1518b1d-569a-475c-ac03-5ccf624c3a36","Type":"ContainerStarted","Data":"cc1c536acf52fecacf0dd50d4e11dd0ec5538504c3022f8316aaf4b579851a96"} Jan 29 08:56:25 crc kubenswrapper[4895]: I0129 08:56:25.120059 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79z88" event={"ID":"219a2a8d-92c5-4d50-8039-7a9af898cf2d","Type":"ContainerStarted","Data":"146a71fb422d9f556fac20f9b57753c45af389ece26458ce7e644cecc4d2861a"} Jan 29 08:56:25 crc kubenswrapper[4895]: I0129 08:56:25.120100 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79z88" event={"ID":"219a2a8d-92c5-4d50-8039-7a9af898cf2d","Type":"ContainerStarted","Data":"788b41180784088afbddab9abe1eaa2d33eec2650da0bbbdbb3a2ae8a70092dc"} Jan 29 08:56:26 crc kubenswrapper[4895]: I0129 08:56:26.127885 4895 generic.go:334] "Generic (PLEG): container finished" podID="f1518b1d-569a-475c-ac03-5ccf624c3a36" containerID="a0c2e6c17ddc66eee81cf22b717910a8a0bec733d5b103875ea81ecded405e19" exitCode=0 Jan 29 08:56:26 crc kubenswrapper[4895]: I0129 08:56:26.127978 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" 
event={"ID":"f1518b1d-569a-475c-ac03-5ccf624c3a36","Type":"ContainerDied","Data":"a0c2e6c17ddc66eee81cf22b717910a8a0bec733d5b103875ea81ecded405e19"} Jan 29 08:56:26 crc kubenswrapper[4895]: I0129 08:56:26.130282 4895 generic.go:334] "Generic (PLEG): container finished" podID="219a2a8d-92c5-4d50-8039-7a9af898cf2d" containerID="146a71fb422d9f556fac20f9b57753c45af389ece26458ce7e644cecc4d2861a" exitCode=0 Jan 29 08:56:26 crc kubenswrapper[4895]: I0129 08:56:26.130647 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79z88" event={"ID":"219a2a8d-92c5-4d50-8039-7a9af898cf2d","Type":"ContainerDied","Data":"146a71fb422d9f556fac20f9b57753c45af389ece26458ce7e644cecc4d2861a"} Jan 29 08:56:27 crc kubenswrapper[4895]: I0129 08:56:27.142093 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79z88" event={"ID":"219a2a8d-92c5-4d50-8039-7a9af898cf2d","Type":"ContainerStarted","Data":"e070440e110af510ad5b69bff0d6276c246dd9c4e344ce7d63b20601387bf122"} Jan 29 08:56:27 crc kubenswrapper[4895]: I0129 08:56:27.145712 4895 generic.go:334] "Generic (PLEG): container finished" podID="f1518b1d-569a-475c-ac03-5ccf624c3a36" containerID="44a0cc5324e0b3341609170d0845c469314754713625672c9e1cab098d8f4883" exitCode=0 Jan 29 08:56:27 crc kubenswrapper[4895]: I0129 08:56:27.145786 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" event={"ID":"f1518b1d-569a-475c-ac03-5ccf624c3a36","Type":"ContainerDied","Data":"44a0cc5324e0b3341609170d0845c469314754713625672c9e1cab098d8f4883"} Jan 29 08:56:27 crc kubenswrapper[4895]: I0129 08:56:27.629101 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:27 crc kubenswrapper[4895]: I0129 08:56:27.629242 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:27 crc kubenswrapper[4895]: I0129 08:56:27.707843 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:28 crc kubenswrapper[4895]: I0129 08:56:28.154853 4895 generic.go:334] "Generic (PLEG): container finished" podID="219a2a8d-92c5-4d50-8039-7a9af898cf2d" containerID="e070440e110af510ad5b69bff0d6276c246dd9c4e344ce7d63b20601387bf122" exitCode=0 Jan 29 08:56:28 crc kubenswrapper[4895]: I0129 08:56:28.154933 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79z88" event={"ID":"219a2a8d-92c5-4d50-8039-7a9af898cf2d","Type":"ContainerDied","Data":"e070440e110af510ad5b69bff0d6276c246dd9c4e344ce7d63b20601387bf122"} Jan 29 08:56:28 crc kubenswrapper[4895]: I0129 08:56:28.158392 4895 generic.go:334] "Generic (PLEG): container finished" podID="f1518b1d-569a-475c-ac03-5ccf624c3a36" containerID="4a109f55549d14c57e3217ca3477d4a6c01f6fc44db09a89d48326f667bf91bc" exitCode=0 Jan 29 08:56:28 crc kubenswrapper[4895]: I0129 08:56:28.158456 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" event={"ID":"f1518b1d-569a-475c-ac03-5ccf624c3a36","Type":"ContainerDied","Data":"4a109f55549d14c57e3217ca3477d4a6c01f6fc44db09a89d48326f667bf91bc"} Jan 29 08:56:28 crc kubenswrapper[4895]: I0129 08:56:28.203405 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:29 crc kubenswrapper[4895]: I0129 08:56:29.526773 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" Jan 29 08:56:29 crc kubenswrapper[4895]: I0129 08:56:29.685225 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd6kb\" (UniqueName: \"kubernetes.io/projected/f1518b1d-569a-475c-ac03-5ccf624c3a36-kube-api-access-zd6kb\") pod \"f1518b1d-569a-475c-ac03-5ccf624c3a36\" (UID: \"f1518b1d-569a-475c-ac03-5ccf624c3a36\") " Jan 29 08:56:29 crc kubenswrapper[4895]: I0129 08:56:29.685297 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1518b1d-569a-475c-ac03-5ccf624c3a36-bundle\") pod \"f1518b1d-569a-475c-ac03-5ccf624c3a36\" (UID: \"f1518b1d-569a-475c-ac03-5ccf624c3a36\") " Jan 29 08:56:29 crc kubenswrapper[4895]: I0129 08:56:29.685366 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1518b1d-569a-475c-ac03-5ccf624c3a36-util\") pod \"f1518b1d-569a-475c-ac03-5ccf624c3a36\" (UID: \"f1518b1d-569a-475c-ac03-5ccf624c3a36\") " Jan 29 08:56:29 crc kubenswrapper[4895]: I0129 08:56:29.686383 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1518b1d-569a-475c-ac03-5ccf624c3a36-bundle" (OuterVolumeSpecName: "bundle") pod "f1518b1d-569a-475c-ac03-5ccf624c3a36" (UID: "f1518b1d-569a-475c-ac03-5ccf624c3a36"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:56:29 crc kubenswrapper[4895]: I0129 08:56:29.692858 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1518b1d-569a-475c-ac03-5ccf624c3a36-kube-api-access-zd6kb" (OuterVolumeSpecName: "kube-api-access-zd6kb") pod "f1518b1d-569a-475c-ac03-5ccf624c3a36" (UID: "f1518b1d-569a-475c-ac03-5ccf624c3a36"). InnerVolumeSpecName "kube-api-access-zd6kb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:56:29 crc kubenswrapper[4895]: I0129 08:56:29.700885 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1518b1d-569a-475c-ac03-5ccf624c3a36-util" (OuterVolumeSpecName: "util") pod "f1518b1d-569a-475c-ac03-5ccf624c3a36" (UID: "f1518b1d-569a-475c-ac03-5ccf624c3a36"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:56:29 crc kubenswrapper[4895]: I0129 08:56:29.786674 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd6kb\" (UniqueName: \"kubernetes.io/projected/f1518b1d-569a-475c-ac03-5ccf624c3a36-kube-api-access-zd6kb\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:29 crc kubenswrapper[4895]: I0129 08:56:29.786714 4895 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1518b1d-569a-475c-ac03-5ccf624c3a36-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:29 crc kubenswrapper[4895]: I0129 08:56:29.786728 4895 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1518b1d-569a-475c-ac03-5ccf624c3a36-util\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:30 crc kubenswrapper[4895]: I0129 08:56:30.175731 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79z88" event={"ID":"219a2a8d-92c5-4d50-8039-7a9af898cf2d","Type":"ContainerStarted","Data":"585b955a166e1a0623fff67311ab71d22767853d18b1ad54c632e76ef57c406e"} Jan 29 08:56:30 crc kubenswrapper[4895]: I0129 08:56:30.179348 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" event={"ID":"f1518b1d-569a-475c-ac03-5ccf624c3a36","Type":"ContainerDied","Data":"cc1c536acf52fecacf0dd50d4e11dd0ec5538504c3022f8316aaf4b579851a96"} Jan 29 08:56:30 crc kubenswrapper[4895]: I0129 08:56:30.179377 4895 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8" Jan 29 08:56:30 crc kubenswrapper[4895]: I0129 08:56:30.179395 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc1c536acf52fecacf0dd50d4e11dd0ec5538504c3022f8316aaf4b579851a96" Jan 29 08:56:30 crc kubenswrapper[4895]: I0129 08:56:30.201986 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-79z88" podStartSLOduration=3.455023872 podStartE2EDuration="7.201959001s" podCreationTimestamp="2026-01-29 08:56:23 +0000 UTC" firstStartedPulling="2026-01-29 08:56:26.131849211 +0000 UTC m=+927.773357347" lastFinishedPulling="2026-01-29 08:56:29.87878431 +0000 UTC m=+931.520292476" observedRunningTime="2026-01-29 08:56:30.19780104 +0000 UTC m=+931.839309186" watchObservedRunningTime="2026-01-29 08:56:30.201959001 +0000 UTC m=+931.843467147" Jan 29 08:56:30 crc kubenswrapper[4895]: I0129 08:56:30.480963 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vcdnq"] Jan 29 08:56:31 crc kubenswrapper[4895]: I0129 08:56:31.187102 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vcdnq" podUID="f1dcf7d1-0b00-4458-bebe-3c75e15e6f11" containerName="registry-server" containerID="cri-o://5290b70e342d03f48cc4d6cc50452815591ef56cdb9f4eadd5fcd74dfb8c7b8f" gracePeriod=2 Jan 29 08:56:31 crc kubenswrapper[4895]: I0129 08:56:31.576628 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:31 crc kubenswrapper[4895]: I0129 08:56:31.720548 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-catalog-content\") pod \"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11\" (UID: \"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11\") " Jan 29 08:56:31 crc kubenswrapper[4895]: I0129 08:56:31.720691 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-utilities\") pod \"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11\" (UID: \"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11\") " Jan 29 08:56:31 crc kubenswrapper[4895]: I0129 08:56:31.720841 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwm5m\" (UniqueName: \"kubernetes.io/projected/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-kube-api-access-jwm5m\") pod \"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11\" (UID: \"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11\") " Jan 29 08:56:31 crc kubenswrapper[4895]: I0129 08:56:31.721560 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-utilities" (OuterVolumeSpecName: "utilities") pod "f1dcf7d1-0b00-4458-bebe-3c75e15e6f11" (UID: "f1dcf7d1-0b00-4458-bebe-3c75e15e6f11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:56:31 crc kubenswrapper[4895]: I0129 08:56:31.727578 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-kube-api-access-jwm5m" (OuterVolumeSpecName: "kube-api-access-jwm5m") pod "f1dcf7d1-0b00-4458-bebe-3c75e15e6f11" (UID: "f1dcf7d1-0b00-4458-bebe-3c75e15e6f11"). InnerVolumeSpecName "kube-api-access-jwm5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:56:31 crc kubenswrapper[4895]: I0129 08:56:31.766502 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1dcf7d1-0b00-4458-bebe-3c75e15e6f11" (UID: "f1dcf7d1-0b00-4458-bebe-3c75e15e6f11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:56:31 crc kubenswrapper[4895]: I0129 08:56:31.823016 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:31 crc kubenswrapper[4895]: I0129 08:56:31.823070 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:31 crc kubenswrapper[4895]: I0129 08:56:31.823091 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwm5m\" (UniqueName: \"kubernetes.io/projected/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11-kube-api-access-jwm5m\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:32 crc kubenswrapper[4895]: I0129 08:56:32.206007 4895 generic.go:334] "Generic (PLEG): container finished" podID="f1dcf7d1-0b00-4458-bebe-3c75e15e6f11" containerID="5290b70e342d03f48cc4d6cc50452815591ef56cdb9f4eadd5fcd74dfb8c7b8f" exitCode=0 Jan 29 08:56:32 crc kubenswrapper[4895]: I0129 08:56:32.206078 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcdnq" event={"ID":"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11","Type":"ContainerDied","Data":"5290b70e342d03f48cc4d6cc50452815591ef56cdb9f4eadd5fcd74dfb8c7b8f"} Jan 29 08:56:32 crc kubenswrapper[4895]: I0129 08:56:32.206134 4895 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcdnq" Jan 29 08:56:32 crc kubenswrapper[4895]: I0129 08:56:32.206159 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcdnq" event={"ID":"f1dcf7d1-0b00-4458-bebe-3c75e15e6f11","Type":"ContainerDied","Data":"0d797b1fcf6ba1764152805b92e643b5c35d7f67b894b625417b7e23959efce3"} Jan 29 08:56:32 crc kubenswrapper[4895]: I0129 08:56:32.206193 4895 scope.go:117] "RemoveContainer" containerID="5290b70e342d03f48cc4d6cc50452815591ef56cdb9f4eadd5fcd74dfb8c7b8f" Jan 29 08:56:32 crc kubenswrapper[4895]: I0129 08:56:32.230394 4895 scope.go:117] "RemoveContainer" containerID="c90cc555fa8032aad574f3a1c3f903d5a9de47302ece296c117540f42b693bdc" Jan 29 08:56:32 crc kubenswrapper[4895]: I0129 08:56:32.235813 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vcdnq"] Jan 29 08:56:32 crc kubenswrapper[4895]: I0129 08:56:32.246941 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vcdnq"] Jan 29 08:56:32 crc kubenswrapper[4895]: I0129 08:56:32.252429 4895 scope.go:117] "RemoveContainer" containerID="131dcd4e2bc22fcb8231e2cbb1520c4e2ef4239cd9f84dc92d32faf9f10c04e2" Jan 29 08:56:32 crc kubenswrapper[4895]: I0129 08:56:32.272521 4895 scope.go:117] "RemoveContainer" containerID="5290b70e342d03f48cc4d6cc50452815591ef56cdb9f4eadd5fcd74dfb8c7b8f" Jan 29 08:56:32 crc kubenswrapper[4895]: E0129 08:56:32.273069 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5290b70e342d03f48cc4d6cc50452815591ef56cdb9f4eadd5fcd74dfb8c7b8f\": container with ID starting with 5290b70e342d03f48cc4d6cc50452815591ef56cdb9f4eadd5fcd74dfb8c7b8f not found: ID does not exist" containerID="5290b70e342d03f48cc4d6cc50452815591ef56cdb9f4eadd5fcd74dfb8c7b8f" Jan 29 08:56:32 crc kubenswrapper[4895]: I0129 08:56:32.273107 
4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5290b70e342d03f48cc4d6cc50452815591ef56cdb9f4eadd5fcd74dfb8c7b8f"} err="failed to get container status \"5290b70e342d03f48cc4d6cc50452815591ef56cdb9f4eadd5fcd74dfb8c7b8f\": rpc error: code = NotFound desc = could not find container \"5290b70e342d03f48cc4d6cc50452815591ef56cdb9f4eadd5fcd74dfb8c7b8f\": container with ID starting with 5290b70e342d03f48cc4d6cc50452815591ef56cdb9f4eadd5fcd74dfb8c7b8f not found: ID does not exist" Jan 29 08:56:32 crc kubenswrapper[4895]: I0129 08:56:32.273133 4895 scope.go:117] "RemoveContainer" containerID="c90cc555fa8032aad574f3a1c3f903d5a9de47302ece296c117540f42b693bdc" Jan 29 08:56:32 crc kubenswrapper[4895]: E0129 08:56:32.273714 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c90cc555fa8032aad574f3a1c3f903d5a9de47302ece296c117540f42b693bdc\": container with ID starting with c90cc555fa8032aad574f3a1c3f903d5a9de47302ece296c117540f42b693bdc not found: ID does not exist" containerID="c90cc555fa8032aad574f3a1c3f903d5a9de47302ece296c117540f42b693bdc" Jan 29 08:56:32 crc kubenswrapper[4895]: I0129 08:56:32.273771 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c90cc555fa8032aad574f3a1c3f903d5a9de47302ece296c117540f42b693bdc"} err="failed to get container status \"c90cc555fa8032aad574f3a1c3f903d5a9de47302ece296c117540f42b693bdc\": rpc error: code = NotFound desc = could not find container \"c90cc555fa8032aad574f3a1c3f903d5a9de47302ece296c117540f42b693bdc\": container with ID starting with c90cc555fa8032aad574f3a1c3f903d5a9de47302ece296c117540f42b693bdc not found: ID does not exist" Jan 29 08:56:32 crc kubenswrapper[4895]: I0129 08:56:32.273813 4895 scope.go:117] "RemoveContainer" containerID="131dcd4e2bc22fcb8231e2cbb1520c4e2ef4239cd9f84dc92d32faf9f10c04e2" Jan 29 08:56:32 crc kubenswrapper[4895]: E0129 
08:56:32.274500 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"131dcd4e2bc22fcb8231e2cbb1520c4e2ef4239cd9f84dc92d32faf9f10c04e2\": container with ID starting with 131dcd4e2bc22fcb8231e2cbb1520c4e2ef4239cd9f84dc92d32faf9f10c04e2 not found: ID does not exist" containerID="131dcd4e2bc22fcb8231e2cbb1520c4e2ef4239cd9f84dc92d32faf9f10c04e2" Jan 29 08:56:32 crc kubenswrapper[4895]: I0129 08:56:32.274532 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"131dcd4e2bc22fcb8231e2cbb1520c4e2ef4239cd9f84dc92d32faf9f10c04e2"} err="failed to get container status \"131dcd4e2bc22fcb8231e2cbb1520c4e2ef4239cd9f84dc92d32faf9f10c04e2\": rpc error: code = NotFound desc = could not find container \"131dcd4e2bc22fcb8231e2cbb1520c4e2ef4239cd9f84dc92d32faf9f10c04e2\": container with ID starting with 131dcd4e2bc22fcb8231e2cbb1520c4e2ef4239cd9f84dc92d32faf9f10c04e2 not found: ID does not exist" Jan 29 08:56:33 crc kubenswrapper[4895]: I0129 08:56:33.219063 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1dcf7d1-0b00-4458-bebe-3c75e15e6f11" path="/var/lib/kubelet/pods/f1dcf7d1-0b00-4458-bebe-3c75e15e6f11/volumes" Jan 29 08:56:33 crc kubenswrapper[4895]: I0129 08:56:33.691594 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-777976898d-2mx8n"] Jan 29 08:56:33 crc kubenswrapper[4895]: E0129 08:56:33.691896 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1518b1d-569a-475c-ac03-5ccf624c3a36" containerName="pull" Jan 29 08:56:33 crc kubenswrapper[4895]: I0129 08:56:33.691911 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1518b1d-569a-475c-ac03-5ccf624c3a36" containerName="pull" Jan 29 08:56:33 crc kubenswrapper[4895]: E0129 08:56:33.691957 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f1dcf7d1-0b00-4458-bebe-3c75e15e6f11" containerName="extract-utilities" Jan 29 08:56:33 crc kubenswrapper[4895]: I0129 08:56:33.691969 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1dcf7d1-0b00-4458-bebe-3c75e15e6f11" containerName="extract-utilities" Jan 29 08:56:33 crc kubenswrapper[4895]: E0129 08:56:33.691984 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1518b1d-569a-475c-ac03-5ccf624c3a36" containerName="util" Jan 29 08:56:33 crc kubenswrapper[4895]: I0129 08:56:33.691994 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1518b1d-569a-475c-ac03-5ccf624c3a36" containerName="util" Jan 29 08:56:33 crc kubenswrapper[4895]: E0129 08:56:33.692008 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1dcf7d1-0b00-4458-bebe-3c75e15e6f11" containerName="extract-content" Jan 29 08:56:33 crc kubenswrapper[4895]: I0129 08:56:33.692018 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1dcf7d1-0b00-4458-bebe-3c75e15e6f11" containerName="extract-content" Jan 29 08:56:33 crc kubenswrapper[4895]: E0129 08:56:33.692032 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1dcf7d1-0b00-4458-bebe-3c75e15e6f11" containerName="registry-server" Jan 29 08:56:33 crc kubenswrapper[4895]: I0129 08:56:33.692039 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1dcf7d1-0b00-4458-bebe-3c75e15e6f11" containerName="registry-server" Jan 29 08:56:33 crc kubenswrapper[4895]: E0129 08:56:33.692048 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1518b1d-569a-475c-ac03-5ccf624c3a36" containerName="extract" Jan 29 08:56:33 crc kubenswrapper[4895]: I0129 08:56:33.692054 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1518b1d-569a-475c-ac03-5ccf624c3a36" containerName="extract" Jan 29 08:56:33 crc kubenswrapper[4895]: I0129 08:56:33.692186 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1dcf7d1-0b00-4458-bebe-3c75e15e6f11" 
containerName="registry-server" Jan 29 08:56:33 crc kubenswrapper[4895]: I0129 08:56:33.692203 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1518b1d-569a-475c-ac03-5ccf624c3a36" containerName="extract" Jan 29 08:56:33 crc kubenswrapper[4895]: I0129 08:56:33.692640 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-777976898d-2mx8n" Jan 29 08:56:33 crc kubenswrapper[4895]: I0129 08:56:33.697724 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-vd4fw" Jan 29 08:56:33 crc kubenswrapper[4895]: I0129 08:56:33.730585 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-777976898d-2mx8n"] Jan 29 08:56:33 crc kubenswrapper[4895]: I0129 08:56:33.852818 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qfg4\" (UniqueName: \"kubernetes.io/projected/5567d75e-d4d1-4f59-a79b-b185eaadd750-kube-api-access-4qfg4\") pod \"openstack-operator-controller-init-777976898d-2mx8n\" (UID: \"5567d75e-d4d1-4f59-a79b-b185eaadd750\") " pod="openstack-operators/openstack-operator-controller-init-777976898d-2mx8n" Jan 29 08:56:33 crc kubenswrapper[4895]: I0129 08:56:33.955033 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qfg4\" (UniqueName: \"kubernetes.io/projected/5567d75e-d4d1-4f59-a79b-b185eaadd750-kube-api-access-4qfg4\") pod \"openstack-operator-controller-init-777976898d-2mx8n\" (UID: \"5567d75e-d4d1-4f59-a79b-b185eaadd750\") " pod="openstack-operators/openstack-operator-controller-init-777976898d-2mx8n" Jan 29 08:56:33 crc kubenswrapper[4895]: I0129 08:56:33.990409 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qfg4\" (UniqueName: 
\"kubernetes.io/projected/5567d75e-d4d1-4f59-a79b-b185eaadd750-kube-api-access-4qfg4\") pod \"openstack-operator-controller-init-777976898d-2mx8n\" (UID: \"5567d75e-d4d1-4f59-a79b-b185eaadd750\") " pod="openstack-operators/openstack-operator-controller-init-777976898d-2mx8n" Jan 29 08:56:34 crc kubenswrapper[4895]: I0129 08:56:34.009790 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-777976898d-2mx8n" Jan 29 08:56:34 crc kubenswrapper[4895]: I0129 08:56:34.048050 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:34 crc kubenswrapper[4895]: I0129 08:56:34.049063 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:34 crc kubenswrapper[4895]: I0129 08:56:34.133389 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:34 crc kubenswrapper[4895]: I0129 08:56:34.283940 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-777976898d-2mx8n"] Jan 29 08:56:34 crc kubenswrapper[4895]: W0129 08:56:34.299390 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5567d75e_d4d1_4f59_a79b_b185eaadd750.slice/crio-0cd8d7d0e8dbce8d1936b426e02adf1c33dfa06bdea82d4d23e7e2f728f462c2 WatchSource:0}: Error finding container 0cd8d7d0e8dbce8d1936b426e02adf1c33dfa06bdea82d4d23e7e2f728f462c2: Status 404 returned error can't find the container with id 0cd8d7d0e8dbce8d1936b426e02adf1c33dfa06bdea82d4d23e7e2f728f462c2 Jan 29 08:56:35 crc kubenswrapper[4895]: I0129 08:56:35.243209 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-777976898d-2mx8n" 
event={"ID":"5567d75e-d4d1-4f59-a79b-b185eaadd750","Type":"ContainerStarted","Data":"0cd8d7d0e8dbce8d1936b426e02adf1c33dfa06bdea82d4d23e7e2f728f462c2"} Jan 29 08:56:35 crc kubenswrapper[4895]: I0129 08:56:35.333332 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:37 crc kubenswrapper[4895]: I0129 08:56:37.883645 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-79z88"] Jan 29 08:56:37 crc kubenswrapper[4895]: I0129 08:56:37.884204 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-79z88" podUID="219a2a8d-92c5-4d50-8039-7a9af898cf2d" containerName="registry-server" containerID="cri-o://585b955a166e1a0623fff67311ab71d22767853d18b1ad54c632e76ef57c406e" gracePeriod=2 Jan 29 08:56:39 crc kubenswrapper[4895]: I0129 08:56:39.302416 4895 generic.go:334] "Generic (PLEG): container finished" podID="219a2a8d-92c5-4d50-8039-7a9af898cf2d" containerID="585b955a166e1a0623fff67311ab71d22767853d18b1ad54c632e76ef57c406e" exitCode=0 Jan 29 08:56:39 crc kubenswrapper[4895]: I0129 08:56:39.302594 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79z88" event={"ID":"219a2a8d-92c5-4d50-8039-7a9af898cf2d","Type":"ContainerDied","Data":"585b955a166e1a0623fff67311ab71d22767853d18b1ad54c632e76ef57c406e"} Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.149969 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.260139 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/219a2a8d-92c5-4d50-8039-7a9af898cf2d-catalog-content\") pod \"219a2a8d-92c5-4d50-8039-7a9af898cf2d\" (UID: \"219a2a8d-92c5-4d50-8039-7a9af898cf2d\") " Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.260358 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkhff\" (UniqueName: \"kubernetes.io/projected/219a2a8d-92c5-4d50-8039-7a9af898cf2d-kube-api-access-kkhff\") pod \"219a2a8d-92c5-4d50-8039-7a9af898cf2d\" (UID: \"219a2a8d-92c5-4d50-8039-7a9af898cf2d\") " Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.260469 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/219a2a8d-92c5-4d50-8039-7a9af898cf2d-utilities\") pod \"219a2a8d-92c5-4d50-8039-7a9af898cf2d\" (UID: \"219a2a8d-92c5-4d50-8039-7a9af898cf2d\") " Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.263226 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/219a2a8d-92c5-4d50-8039-7a9af898cf2d-utilities" (OuterVolumeSpecName: "utilities") pod "219a2a8d-92c5-4d50-8039-7a9af898cf2d" (UID: "219a2a8d-92c5-4d50-8039-7a9af898cf2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.282691 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/219a2a8d-92c5-4d50-8039-7a9af898cf2d-kube-api-access-kkhff" (OuterVolumeSpecName: "kube-api-access-kkhff") pod "219a2a8d-92c5-4d50-8039-7a9af898cf2d" (UID: "219a2a8d-92c5-4d50-8039-7a9af898cf2d"). InnerVolumeSpecName "kube-api-access-kkhff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.323394 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/219a2a8d-92c5-4d50-8039-7a9af898cf2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "219a2a8d-92c5-4d50-8039-7a9af898cf2d" (UID: "219a2a8d-92c5-4d50-8039-7a9af898cf2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.333160 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79z88" event={"ID":"219a2a8d-92c5-4d50-8039-7a9af898cf2d","Type":"ContainerDied","Data":"788b41180784088afbddab9abe1eaa2d33eec2650da0bbbdbb3a2ae8a70092dc"} Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.333388 4895 scope.go:117] "RemoveContainer" containerID="585b955a166e1a0623fff67311ab71d22767853d18b1ad54c632e76ef57c406e" Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.333376 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-79z88" Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.355936 4895 scope.go:117] "RemoveContainer" containerID="e070440e110af510ad5b69bff0d6276c246dd9c4e344ce7d63b20601387bf122" Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.364656 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/219a2a8d-92c5-4d50-8039-7a9af898cf2d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.364736 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/219a2a8d-92c5-4d50-8039-7a9af898cf2d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.364761 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkhff\" (UniqueName: \"kubernetes.io/projected/219a2a8d-92c5-4d50-8039-7a9af898cf2d-kube-api-access-kkhff\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.379026 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-79z88"] Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.387228 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-79z88"] Jan 29 08:56:41 crc kubenswrapper[4895]: I0129 08:56:41.390775 4895 scope.go:117] "RemoveContainer" containerID="146a71fb422d9f556fac20f9b57753c45af389ece26458ce7e644cecc4d2861a" Jan 29 08:56:42 crc kubenswrapper[4895]: I0129 08:56:42.342607 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-777976898d-2mx8n" event={"ID":"5567d75e-d4d1-4f59-a79b-b185eaadd750","Type":"ContainerStarted","Data":"d90ae8ca988b812164645906c0ab03e11cedf70725ea928b4179f5f22e1d3fb4"} Jan 29 08:56:42 crc kubenswrapper[4895]: I0129 
08:56:42.342750 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-777976898d-2mx8n" Jan 29 08:56:42 crc kubenswrapper[4895]: I0129 08:56:42.375682 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-777976898d-2mx8n" podStartSLOduration=2.509256997 podStartE2EDuration="9.375661506s" podCreationTimestamp="2026-01-29 08:56:33 +0000 UTC" firstStartedPulling="2026-01-29 08:56:34.304067525 +0000 UTC m=+935.945575671" lastFinishedPulling="2026-01-29 08:56:41.170472024 +0000 UTC m=+942.811980180" observedRunningTime="2026-01-29 08:56:42.373139268 +0000 UTC m=+944.014647424" watchObservedRunningTime="2026-01-29 08:56:42.375661506 +0000 UTC m=+944.017169642" Jan 29 08:56:43 crc kubenswrapper[4895]: I0129 08:56:43.220959 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="219a2a8d-92c5-4d50-8039-7a9af898cf2d" path="/var/lib/kubelet/pods/219a2a8d-92c5-4d50-8039-7a9af898cf2d/volumes" Jan 29 08:56:46 crc kubenswrapper[4895]: I0129 08:56:46.021347 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:56:46 crc kubenswrapper[4895]: I0129 08:56:46.021433 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:56:54 crc kubenswrapper[4895]: I0129 08:56:54.013539 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-init-777976898d-2mx8n" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.800283 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-gd75d"] Jan 29 08:57:13 crc kubenswrapper[4895]: E0129 08:57:13.801876 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219a2a8d-92c5-4d50-8039-7a9af898cf2d" containerName="extract-content" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.802241 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="219a2a8d-92c5-4d50-8039-7a9af898cf2d" containerName="extract-content" Jan 29 08:57:13 crc kubenswrapper[4895]: E0129 08:57:13.802275 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219a2a8d-92c5-4d50-8039-7a9af898cf2d" containerName="registry-server" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.802288 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="219a2a8d-92c5-4d50-8039-7a9af898cf2d" containerName="registry-server" Jan 29 08:57:13 crc kubenswrapper[4895]: E0129 08:57:13.802315 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219a2a8d-92c5-4d50-8039-7a9af898cf2d" containerName="extract-utilities" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.802332 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="219a2a8d-92c5-4d50-8039-7a9af898cf2d" containerName="extract-utilities" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.802646 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="219a2a8d-92c5-4d50-8039-7a9af898cf2d" containerName="registry-server" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.803407 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-gd75d" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.808432 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-6cz2h"] Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.809772 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6cz2h" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.810931 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-n992k" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.814014 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-sr25x" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.821706 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-6cz2h"] Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.832027 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-gd75d"] Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.838189 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-58zzj"] Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.842115 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-58zzj" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.850721 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-r87x4" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.862340 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-58zzj"] Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.882393 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-7hp5l"] Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.883218 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jmlm\" (UniqueName: \"kubernetes.io/projected/bc16fc79-c074-4969-af29-c46fdd06f9f8-kube-api-access-9jmlm\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-gd75d\" (UID: \"bc16fc79-c074-4969-af29-c46fdd06f9f8\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-gd75d" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.883341 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdff6\" (UniqueName: \"kubernetes.io/projected/d4d2a9b0-6258-4257-9824-74abbbc40b24-kube-api-access-vdff6\") pod \"cinder-operator-controller-manager-8d874c8fc-6cz2h\" (UID: \"d4d2a9b0-6258-4257-9824-74abbbc40b24\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6cz2h" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.893841 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7hp5l" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.981232 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-r6ftq" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.984934 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdff6\" (UniqueName: \"kubernetes.io/projected/d4d2a9b0-6258-4257-9824-74abbbc40b24-kube-api-access-vdff6\") pod \"cinder-operator-controller-manager-8d874c8fc-6cz2h\" (UID: \"d4d2a9b0-6258-4257-9824-74abbbc40b24\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6cz2h" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.985007 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7wdr\" (UniqueName: \"kubernetes.io/projected/b2dd46da-1ebf-489f-8467-eab7fc206736-kube-api-access-c7wdr\") pod \"designate-operator-controller-manager-6d9697b7f4-58zzj\" (UID: \"b2dd46da-1ebf-489f-8467-eab7fc206736\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-58zzj" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.985086 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jmlm\" (UniqueName: \"kubernetes.io/projected/bc16fc79-c074-4969-af29-c46fdd06f9f8-kube-api-access-9jmlm\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-gd75d\" (UID: \"bc16fc79-c074-4969-af29-c46fdd06f9f8\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-gd75d" Jan 29 08:57:13 crc kubenswrapper[4895]: I0129 08:57:13.985177 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9jpd\" (UniqueName: 
\"kubernetes.io/projected/e97a1d25-e9ba-4ce2-b172-035afb18721b-kube-api-access-b9jpd\") pod \"glance-operator-controller-manager-8886f4c47-7hp5l\" (UID: \"e97a1d25-e9ba-4ce2-b172-035afb18721b\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7hp5l" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.029866 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jmlm\" (UniqueName: \"kubernetes.io/projected/bc16fc79-c074-4969-af29-c46fdd06f9f8-kube-api-access-9jmlm\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-gd75d\" (UID: \"bc16fc79-c074-4969-af29-c46fdd06f9f8\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-gd75d" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.042578 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdff6\" (UniqueName: \"kubernetes.io/projected/d4d2a9b0-6258-4257-9824-74abbbc40b24-kube-api-access-vdff6\") pod \"cinder-operator-controller-manager-8d874c8fc-6cz2h\" (UID: \"d4d2a9b0-6258-4257-9824-74abbbc40b24\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6cz2h" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.044832 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-sdkzk"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.046096 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sdkzk" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.054417 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mzsz8" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.058259 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-9dpss"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.059444 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9dpss" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.061479 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-7hp5l"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.062076 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-k87d5" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.087011 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7wdr\" (UniqueName: \"kubernetes.io/projected/b2dd46da-1ebf-489f-8467-eab7fc206736-kube-api-access-c7wdr\") pod \"designate-operator-controller-manager-6d9697b7f4-58zzj\" (UID: \"b2dd46da-1ebf-489f-8467-eab7fc206736\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-58zzj" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.087192 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9jpd\" (UniqueName: \"kubernetes.io/projected/e97a1d25-e9ba-4ce2-b172-035afb18721b-kube-api-access-b9jpd\") pod \"glance-operator-controller-manager-8886f4c47-7hp5l\" (UID: \"e97a1d25-e9ba-4ce2-b172-035afb18721b\") " 
pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7hp5l" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.088711 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-sdkzk"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.113661 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-9dpss"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.119608 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7wdr\" (UniqueName: \"kubernetes.io/projected/b2dd46da-1ebf-489f-8467-eab7fc206736-kube-api-access-c7wdr\") pod \"designate-operator-controller-manager-6d9697b7f4-58zzj\" (UID: \"b2dd46da-1ebf-489f-8467-eab7fc206736\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-58zzj" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.124738 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-tptkw"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.126545 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.134344 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.134763 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-szhj7" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.135252 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-gd75d" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.140751 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9jpd\" (UniqueName: \"kubernetes.io/projected/e97a1d25-e9ba-4ce2-b172-035afb18721b-kube-api-access-b9jpd\") pod \"glance-operator-controller-manager-8886f4c47-7hp5l\" (UID: \"e97a1d25-e9ba-4ce2-b172-035afb18721b\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7hp5l" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.157700 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6cz2h" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.191216 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnxh8\" (UniqueName: \"kubernetes.io/projected/5e73fff0-3497-4937-bfe0-10bea87ddeb3-kube-api-access-vnxh8\") pod \"heat-operator-controller-manager-69d6db494d-sdkzk\" (UID: \"5e73fff0-3497-4937-bfe0-10bea87ddeb3\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sdkzk" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.191310 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h456t\" (UniqueName: \"kubernetes.io/projected/ad4a2a80-d64f-45b0-bea8-dc2e5c7ea050-kube-api-access-h456t\") pod \"horizon-operator-controller-manager-5fb775575f-9dpss\" (UID: \"ad4a2a80-d64f-45b0-bea8-dc2e5c7ea050\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9dpss" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.222621 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-tptkw"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.231686 4895 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54c4948594-l45qb"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.233011 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-54c4948594-l45qb" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.239799 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-864sq" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.295778 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54c4948594-l45qb"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.302062 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-58zzj" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.309034 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h456t\" (UniqueName: \"kubernetes.io/projected/ad4a2a80-d64f-45b0-bea8-dc2e5c7ea050-kube-api-access-h456t\") pod \"horizon-operator-controller-manager-5fb775575f-9dpss\" (UID: \"ad4a2a80-d64f-45b0-bea8-dc2e5c7ea050\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9dpss" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.309239 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtlh7\" (UniqueName: \"kubernetes.io/projected/cbca22f6-6189-4f59-b9bd-832466c437d1-kube-api-access-wtlh7\") pod \"infra-operator-controller-manager-79955696d6-tptkw\" (UID: \"cbca22f6-6189-4f59-b9bd-832466c437d1\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.309371 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vnxh8\" (UniqueName: \"kubernetes.io/projected/5e73fff0-3497-4937-bfe0-10bea87ddeb3-kube-api-access-vnxh8\") pod \"heat-operator-controller-manager-69d6db494d-sdkzk\" (UID: \"5e73fff0-3497-4937-bfe0-10bea87ddeb3\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sdkzk" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.309447 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert\") pod \"infra-operator-controller-manager-79955696d6-tptkw\" (UID: \"cbca22f6-6189-4f59-b9bd-832466c437d1\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.319022 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7hp5l" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.356802 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnxh8\" (UniqueName: \"kubernetes.io/projected/5e73fff0-3497-4937-bfe0-10bea87ddeb3-kube-api-access-vnxh8\") pod \"heat-operator-controller-manager-69d6db494d-sdkzk\" (UID: \"5e73fff0-3497-4937-bfe0-10bea87ddeb3\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sdkzk" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.358582 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h456t\" (UniqueName: \"kubernetes.io/projected/ad4a2a80-d64f-45b0-bea8-dc2e5c7ea050-kube-api-access-h456t\") pod \"horizon-operator-controller-manager-5fb775575f-9dpss\" (UID: \"ad4a2a80-d64f-45b0-bea8-dc2e5c7ea050\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9dpss" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.392016 4895 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-8t4nd"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.393303 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8t4nd" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.401742 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-lkndb" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.402814 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sdkzk" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.423308 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert\") pod \"infra-operator-controller-manager-79955696d6-tptkw\" (UID: \"cbca22f6-6189-4f59-b9bd-832466c437d1\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.423455 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtlh7\" (UniqueName: \"kubernetes.io/projected/cbca22f6-6189-4f59-b9bd-832466c437d1-kube-api-access-wtlh7\") pod \"infra-operator-controller-manager-79955696d6-tptkw\" (UID: \"cbca22f6-6189-4f59-b9bd-832466c437d1\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.423516 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9pzh\" (UniqueName: \"kubernetes.io/projected/baa89b4d-cf32-498b-a624-585afea7f964-kube-api-access-n9pzh\") pod \"ironic-operator-controller-manager-54c4948594-l45qb\" 
(UID: \"baa89b4d-cf32-498b-a624-585afea7f964\") " pod="openstack-operators/ironic-operator-controller-manager-54c4948594-l45qb" Jan 29 08:57:14 crc kubenswrapper[4895]: E0129 08:57:14.423702 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 08:57:14 crc kubenswrapper[4895]: E0129 08:57:14.423776 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert podName:cbca22f6-6189-4f59-b9bd-832466c437d1 nodeName:}" failed. No retries permitted until 2026-01-29 08:57:14.923748497 +0000 UTC m=+976.565256643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert") pod "infra-operator-controller-manager-79955696d6-tptkw" (UID: "cbca22f6-6189-4f59-b9bd-832466c437d1") : secret "infra-operator-webhook-server-cert" not found Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.424501 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9dpss" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.438327 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-pq8r4"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.439727 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-pq8r4" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.442430 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-dp8c7" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.460710 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-dg5kf"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.461972 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dg5kf" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.466361 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-trzw8" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.484244 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-8t4nd"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.496128 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-pq8r4"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.510350 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-dg5kf"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.527811 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9pzh\" (UniqueName: \"kubernetes.io/projected/baa89b4d-cf32-498b-a624-585afea7f964-kube-api-access-n9pzh\") pod \"ironic-operator-controller-manager-54c4948594-l45qb\" (UID: \"baa89b4d-cf32-498b-a624-585afea7f964\") " pod="openstack-operators/ironic-operator-controller-manager-54c4948594-l45qb" Jan 29 08:57:14 crc 
kubenswrapper[4895]: I0129 08:57:14.535160 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j65wb\" (UniqueName: \"kubernetes.io/projected/348e067e-1b54-43e2-9c01-bf430f7a3630-kube-api-access-j65wb\") pod \"keystone-operator-controller-manager-84f48565d4-8t4nd\" (UID: \"348e067e-1b54-43e2-9c01-bf430f7a3630\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8t4nd" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.536066 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-zbdxv"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.537209 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-zbdxv" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.545475 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-zprnp" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.546307 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.547501 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-zbdxv"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.547534 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.550011 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtlh7\" (UniqueName: \"kubernetes.io/projected/cbca22f6-6189-4f59-b9bd-832466c437d1-kube-api-access-wtlh7\") pod \"infra-operator-controller-manager-79955696d6-tptkw\" (UID: \"cbca22f6-6189-4f59-b9bd-832466c437d1\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.551590 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-r7js9" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.570738 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9pzh\" (UniqueName: \"kubernetes.io/projected/baa89b4d-cf32-498b-a624-585afea7f964-kube-api-access-n9pzh\") pod \"ironic-operator-controller-manager-54c4948594-l45qb\" (UID: \"baa89b4d-cf32-498b-a624-585afea7f964\") " pod="openstack-operators/ironic-operator-controller-manager-54c4948594-l45qb" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.570837 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-qz9c2"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.572115 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-54c4948594-l45qb" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.573148 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-qz9c2" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.577406 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-chzgm" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.584406 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.593452 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-qz9c2"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.601724 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.602838 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.604801 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.606443 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-85rtk" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.635236 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.636349 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.638209 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvfcz\" (UniqueName: \"kubernetes.io/projected/358815d3-7542-429d-bfa0-742e75ada2f6-kube-api-access-dvfcz\") pod \"manila-operator-controller-manager-7dd968899f-pq8r4\" (UID: \"358815d3-7542-429d-bfa0-742e75ada2f6\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-pq8r4" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.638271 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqfkl\" (UniqueName: \"kubernetes.io/projected/bb23ce65-61d9-4868-8008-7582ded2bff2-kube-api-access-kqfkl\") pod \"mariadb-operator-controller-manager-67bf948998-dg5kf\" (UID: \"bb23ce65-61d9-4868-8008-7582ded2bff2\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dg5kf" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.638391 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j65wb\" (UniqueName: \"kubernetes.io/projected/348e067e-1b54-43e2-9c01-bf430f7a3630-kube-api-access-j65wb\") pod \"keystone-operator-controller-manager-84f48565d4-8t4nd\" (UID: \"348e067e-1b54-43e2-9c01-bf430f7a3630\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8t4nd" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.638435 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgg2l\" (UniqueName: \"kubernetes.io/projected/c57b39e7-275d-4ef2-af51-3e0b014182ee-kube-api-access-hgg2l\") pod \"neutron-operator-controller-manager-585dbc889-zbdxv\" (UID: \"c57b39e7-275d-4ef2-af51-3e0b014182ee\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-zbdxv" 
Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.652184 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.665876 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.675083 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.676080 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.689007 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-pntdq"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.690225 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-pntdq" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.701539 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.716861 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-pntdq"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.716957 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fczp5"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.718029 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fczp5" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.730759 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.731834 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.739495 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fczp5"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.740796 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqfkl\" (UniqueName: \"kubernetes.io/projected/bb23ce65-61d9-4868-8008-7582ded2bff2-kube-api-access-kqfkl\") pod \"mariadb-operator-controller-manager-67bf948998-dg5kf\" (UID: \"bb23ce65-61d9-4868-8008-7582ded2bff2\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dg5kf" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.740850 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d\" (UID: \"d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.740878 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gfwt\" (UniqueName: \"kubernetes.io/projected/6bf40523-2804-408c-b50d-cb04bf5b32fc-kube-api-access-9gfwt\") pod \"ovn-operator-controller-manager-788c46999f-mj7xz\" (UID: 
\"6bf40523-2804-408c-b50d-cb04bf5b32fc\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.740977 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg5gl\" (UniqueName: \"kubernetes.io/projected/001b758d-81ef-40e5-b53a-7c264915580d-kube-api-access-gg5gl\") pod \"octavia-operator-controller-manager-6687f8d877-qz9c2\" (UID: \"001b758d-81ef-40e5-b53a-7c264915580d\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-qz9c2" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.741023 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zffbk\" (UniqueName: \"kubernetes.io/projected/f7276bca-f319-46bf-a1b4-92a6aec8e6e6-kube-api-access-zffbk\") pod \"nova-operator-controller-manager-55bff696bd-zpdkh\" (UID: \"f7276bca-f319-46bf-a1b4-92a6aec8e6e6\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.741052 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgg2l\" (UniqueName: \"kubernetes.io/projected/c57b39e7-275d-4ef2-af51-3e0b014182ee-kube-api-access-hgg2l\") pod \"neutron-operator-controller-manager-585dbc889-zbdxv\" (UID: \"c57b39e7-275d-4ef2-af51-3e0b014182ee\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-zbdxv" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.741077 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvfcz\" (UniqueName: \"kubernetes.io/projected/358815d3-7542-429d-bfa0-742e75ada2f6-kube-api-access-dvfcz\") pod \"manila-operator-controller-manager-7dd968899f-pq8r4\" (UID: \"358815d3-7542-429d-bfa0-742e75ada2f6\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-pq8r4" Jan 29 
08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.741101 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9h7x\" (UniqueName: \"kubernetes.io/projected/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-kube-api-access-k9h7x\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d\" (UID: \"d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.762840 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.772302 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-gxq7x"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.773338 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-gxq7x" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.785842 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-gxq7x"] Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.835714 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-5kk25" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.839740 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-tgpkl" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.845070 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert\") pod 
\"openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d\" (UID: \"d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.845130 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gfwt\" (UniqueName: \"kubernetes.io/projected/6bf40523-2804-408c-b50d-cb04bf5b32fc-kube-api-access-9gfwt\") pod \"ovn-operator-controller-manager-788c46999f-mj7xz\" (UID: \"6bf40523-2804-408c-b50d-cb04bf5b32fc\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.845176 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjqjv\" (UniqueName: \"kubernetes.io/projected/7520cf55-cb4a-4598-80d9-499ab60f5ff1-kube-api-access-gjqjv\") pod \"test-operator-controller-manager-56f8bfcd9f-4zrlz\" (UID: \"7520cf55-cb4a-4598-80d9-499ab60f5ff1\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.845217 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx4vj\" (UniqueName: \"kubernetes.io/projected/853077df-3183-4811-8554-5940dc41912e-kube-api-access-xx4vj\") pod \"telemetry-operator-controller-manager-64b5b76f97-fczp5\" (UID: \"853077df-3183-4811-8554-5940dc41912e\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fczp5" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.845248 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n2rk\" (UniqueName: \"kubernetes.io/projected/bf9282d5-a557-4321-b05d-35552e124429-kube-api-access-6n2rk\") pod \"placement-operator-controller-manager-5b964cf4cd-mnp2h\" (UID: 
\"bf9282d5-a557-4321-b05d-35552e124429\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.845279 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg5gl\" (UniqueName: \"kubernetes.io/projected/001b758d-81ef-40e5-b53a-7c264915580d-kube-api-access-gg5gl\") pod \"octavia-operator-controller-manager-6687f8d877-qz9c2\" (UID: \"001b758d-81ef-40e5-b53a-7c264915580d\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-qz9c2" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.845333 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zffbk\" (UniqueName: \"kubernetes.io/projected/f7276bca-f319-46bf-a1b4-92a6aec8e6e6-kube-api-access-zffbk\") pod \"nova-operator-controller-manager-55bff696bd-zpdkh\" (UID: \"f7276bca-f319-46bf-a1b4-92a6aec8e6e6\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.872849 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9h7x\" (UniqueName: \"kubernetes.io/projected/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-kube-api-access-k9h7x\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d\" (UID: \"d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.873153 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmn7x\" (UniqueName: \"kubernetes.io/projected/c268affd-83d0-4313-a5ba-ee20846ad416-kube-api-access-tmn7x\") pod \"swift-operator-controller-manager-68fc8c869-pntdq\" (UID: \"c268affd-83d0-4313-a5ba-ee20846ad416\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-pntdq" 
Jan 29 08:57:14 crc kubenswrapper[4895]: E0129 08:57:14.874257 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:57:14 crc kubenswrapper[4895]: E0129 08:57:14.874334 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert podName:d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8 nodeName:}" failed. No retries permitted until 2026-01-29 08:57:15.37431116 +0000 UTC m=+977.015819306 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" (UID: "d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.875740 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6rpjd" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.907027 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jb4x7" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.908063 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-kgdct" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.940370 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-5jvcn" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.950083 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvfcz\" (UniqueName: 
\"kubernetes.io/projected/358815d3-7542-429d-bfa0-742e75ada2f6-kube-api-access-dvfcz\") pod \"manila-operator-controller-manager-7dd968899f-pq8r4\" (UID: \"358815d3-7542-429d-bfa0-742e75ada2f6\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-pq8r4" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.958179 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgg2l\" (UniqueName: \"kubernetes.io/projected/c57b39e7-275d-4ef2-af51-3e0b014182ee-kube-api-access-hgg2l\") pod \"neutron-operator-controller-manager-585dbc889-zbdxv\" (UID: \"c57b39e7-275d-4ef2-af51-3e0b014182ee\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-zbdxv" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.958313 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqfkl\" (UniqueName: \"kubernetes.io/projected/bb23ce65-61d9-4868-8008-7582ded2bff2-kube-api-access-kqfkl\") pod \"mariadb-operator-controller-manager-67bf948998-dg5kf\" (UID: \"bb23ce65-61d9-4868-8008-7582ded2bff2\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dg5kf" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.964077 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gfwt\" (UniqueName: \"kubernetes.io/projected/6bf40523-2804-408c-b50d-cb04bf5b32fc-kube-api-access-9gfwt\") pod \"ovn-operator-controller-manager-788c46999f-mj7xz\" (UID: \"6bf40523-2804-408c-b50d-cb04bf5b32fc\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.964822 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9h7x\" (UniqueName: \"kubernetes.io/projected/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-kube-api-access-k9h7x\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d\" (UID: 
\"d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.970597 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j65wb\" (UniqueName: \"kubernetes.io/projected/348e067e-1b54-43e2-9c01-bf430f7a3630-kube-api-access-j65wb\") pod \"keystone-operator-controller-manager-84f48565d4-8t4nd\" (UID: \"348e067e-1b54-43e2-9c01-bf430f7a3630\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8t4nd" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.971902 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg5gl\" (UniqueName: \"kubernetes.io/projected/001b758d-81ef-40e5-b53a-7c264915580d-kube-api-access-gg5gl\") pod \"octavia-operator-controller-manager-6687f8d877-qz9c2\" (UID: \"001b758d-81ef-40e5-b53a-7c264915580d\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-qz9c2" Jan 29 08:57:14 crc kubenswrapper[4895]: I0129 08:57:14.982189 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zffbk\" (UniqueName: \"kubernetes.io/projected/f7276bca-f319-46bf-a1b4-92a6aec8e6e6-kube-api-access-zffbk\") pod \"nova-operator-controller-manager-55bff696bd-zpdkh\" (UID: \"f7276bca-f319-46bf-a1b4-92a6aec8e6e6\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.009840 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.010848 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjqjv\" (UniqueName: \"kubernetes.io/projected/7520cf55-cb4a-4598-80d9-499ab60f5ff1-kube-api-access-gjqjv\") pod \"test-operator-controller-manager-56f8bfcd9f-4zrlz\" (UID: \"7520cf55-cb4a-4598-80d9-499ab60f5ff1\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.010885 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert\") pod \"infra-operator-controller-manager-79955696d6-tptkw\" (UID: \"cbca22f6-6189-4f59-b9bd-832466c437d1\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.010931 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx4vj\" (UniqueName: \"kubernetes.io/projected/853077df-3183-4811-8554-5940dc41912e-kube-api-access-xx4vj\") pod \"telemetry-operator-controller-manager-64b5b76f97-fczp5\" (UID: \"853077df-3183-4811-8554-5940dc41912e\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fczp5" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.010972 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n2rk\" (UniqueName: \"kubernetes.io/projected/bf9282d5-a557-4321-b05d-35552e124429-kube-api-access-6n2rk\") pod \"placement-operator-controller-manager-5b964cf4cd-mnp2h\" (UID: \"bf9282d5-a557-4321-b05d-35552e124429\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.011129 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tr6d\" (UniqueName: \"kubernetes.io/projected/7573c3c1-4b9d-4175-beef-8a4d0c604b6a-kube-api-access-2tr6d\") pod \"watcher-operator-controller-manager-564965969-gxq7x\" (UID: \"7573c3c1-4b9d-4175-beef-8a4d0c604b6a\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-gxq7x" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.011202 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmn7x\" (UniqueName: \"kubernetes.io/projected/c268affd-83d0-4313-a5ba-ee20846ad416-kube-api-access-tmn7x\") pod \"swift-operator-controller-manager-68fc8c869-pntdq\" (UID: \"c268affd-83d0-4313-a5ba-ee20846ad416\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-pntdq" Jan 29 08:57:15 crc kubenswrapper[4895]: E0129 08:57:15.012965 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 08:57:15 crc kubenswrapper[4895]: E0129 08:57:15.013101 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert podName:cbca22f6-6189-4f59-b9bd-832466c437d1 nodeName:}" failed. No retries permitted until 2026-01-29 08:57:16.013071038 +0000 UTC m=+977.654579184 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert") pod "infra-operator-controller-manager-79955696d6-tptkw" (UID: "cbca22f6-6189-4f59-b9bd-832466c437d1") : secret "infra-operator-webhook-server-cert" not found Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.128065 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx4vj\" (UniqueName: \"kubernetes.io/projected/853077df-3183-4811-8554-5940dc41912e-kube-api-access-xx4vj\") pod \"telemetry-operator-controller-manager-64b5b76f97-fczp5\" (UID: \"853077df-3183-4811-8554-5940dc41912e\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fczp5" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.170358 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjqjv\" (UniqueName: \"kubernetes.io/projected/7520cf55-cb4a-4598-80d9-499ab60f5ff1-kube-api-access-gjqjv\") pod \"test-operator-controller-manager-56f8bfcd9f-4zrlz\" (UID: \"7520cf55-cb4a-4598-80d9-499ab60f5ff1\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.172018 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tr6d\" (UniqueName: \"kubernetes.io/projected/7573c3c1-4b9d-4175-beef-8a4d0c604b6a-kube-api-access-2tr6d\") pod \"watcher-operator-controller-manager-564965969-gxq7x\" (UID: \"7573c3c1-4b9d-4175-beef-8a4d0c604b6a\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-gxq7x" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.181874 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n2rk\" (UniqueName: \"kubernetes.io/projected/bf9282d5-a557-4321-b05d-35552e124429-kube-api-access-6n2rk\") pod \"placement-operator-controller-manager-5b964cf4cd-mnp2h\" (UID: 
\"bf9282d5-a557-4321-b05d-35552e124429\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.184893 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmn7x\" (UniqueName: \"kubernetes.io/projected/c268affd-83d0-4313-a5ba-ee20846ad416-kube-api-access-tmn7x\") pod \"swift-operator-controller-manager-68fc8c869-pntdq\" (UID: \"c268affd-83d0-4313-a5ba-ee20846ad416\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-pntdq" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.299269 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-pq8r4" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.300551 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8t4nd" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.300879 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dg5kf" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.311899 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-zbdxv" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.313018 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.313710 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-qz9c2" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.314105 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-pntdq" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.314409 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.354990 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tr6d\" (UniqueName: \"kubernetes.io/projected/7573c3c1-4b9d-4175-beef-8a4d0c604b6a-kube-api-access-2tr6d\") pod \"watcher-operator-controller-manager-564965969-gxq7x\" (UID: \"7573c3c1-4b9d-4175-beef-8a4d0c604b6a\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-gxq7x" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.356087 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fczp5" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.361050 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.370351 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr"] Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.371862 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.375220 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.375506 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.375642 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qxxh6" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.378279 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr"] Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.397140 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-gxq7x" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.397434 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fh6n2"] Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.398628 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fh6n2" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.407806 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d\" (UID: \"d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" Jan 29 08:57:15 crc kubenswrapper[4895]: E0129 08:57:15.408161 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:57:15 crc kubenswrapper[4895]: E0129 08:57:15.408245 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert podName:d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8 nodeName:}" failed. No retries permitted until 2026-01-29 08:57:16.408217929 +0000 UTC m=+978.049726075 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" (UID: "d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.408806 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fh6n2"] Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.449585 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-lwrd8" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.511163 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lgch\" (UniqueName: \"kubernetes.io/projected/1c9af700-ef2b-4d02-a76f-77d31d981a5f-kube-api-access-7lgch\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fh6n2\" (UID: \"1c9af700-ef2b-4d02-a76f-77d31d981a5f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fh6n2" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.511274 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.511472 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs\") pod 
\"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.511506 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2snbb\" (UniqueName: \"kubernetes.io/projected/22d12b29-fd4e-4aa2-9081-a79a3a539dab-kube-api-access-2snbb\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.612540 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lgch\" (UniqueName: \"kubernetes.io/projected/1c9af700-ef2b-4d02-a76f-77d31d981a5f-kube-api-access-7lgch\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fh6n2\" (UID: \"1c9af700-ef2b-4d02-a76f-77d31d981a5f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fh6n2" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.612592 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.612662 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " 
pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.612685 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2snbb\" (UniqueName: \"kubernetes.io/projected/22d12b29-fd4e-4aa2-9081-a79a3a539dab-kube-api-access-2snbb\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:15 crc kubenswrapper[4895]: E0129 08:57:15.613202 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 08:57:15 crc kubenswrapper[4895]: E0129 08:57:15.613253 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs podName:22d12b29-fd4e-4aa2-9081-a79a3a539dab nodeName:}" failed. No retries permitted until 2026-01-29 08:57:16.113236779 +0000 UTC m=+977.754744925 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs") pod "openstack-operator-controller-manager-569b5dc57f-cn6fr" (UID: "22d12b29-fd4e-4aa2-9081-a79a3a539dab") : secret "metrics-server-cert" not found Jan 29 08:57:15 crc kubenswrapper[4895]: E0129 08:57:15.613400 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 08:57:15 crc kubenswrapper[4895]: E0129 08:57:15.613427 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs podName:22d12b29-fd4e-4aa2-9081-a79a3a539dab nodeName:}" failed. No retries permitted until 2026-01-29 08:57:16.113420824 +0000 UTC m=+977.754928970 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs") pod "openstack-operator-controller-manager-569b5dc57f-cn6fr" (UID: "22d12b29-fd4e-4aa2-9081-a79a3a539dab") : secret "webhook-server-cert" not found Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.648348 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2snbb\" (UniqueName: \"kubernetes.io/projected/22d12b29-fd4e-4aa2-9081-a79a3a539dab-kube-api-access-2snbb\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.723196 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lgch\" (UniqueName: \"kubernetes.io/projected/1c9af700-ef2b-4d02-a76f-77d31d981a5f-kube-api-access-7lgch\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fh6n2\" (UID: \"1c9af700-ef2b-4d02-a76f-77d31d981a5f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fh6n2" Jan 29 08:57:15 crc kubenswrapper[4895]: I0129 08:57:15.992777 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fh6n2" Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.036447 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.036536 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.036612 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.037887 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e283faf84652d2e1164b1f178cfd437682bdf8b7e6ce6e055041db42bca73378"} pod="openshift-machine-config-operator/machine-config-daemon-z82hk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.037988 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" containerID="cri-o://e283faf84652d2e1164b1f178cfd437682bdf8b7e6ce6e055041db42bca73378" gracePeriod=600 Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.114593 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-6cz2h"] Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.119628 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-gd75d"] Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.127979 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.128106 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert\") pod \"infra-operator-controller-manager-79955696d6-tptkw\" (UID: \"cbca22f6-6189-4f59-b9bd-832466c437d1\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.128138 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:16 crc kubenswrapper[4895]: E0129 08:57:16.128316 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 08:57:16 crc kubenswrapper[4895]: E0129 08:57:16.128378 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs 
podName:22d12b29-fd4e-4aa2-9081-a79a3a539dab nodeName:}" failed. No retries permitted until 2026-01-29 08:57:17.128361096 +0000 UTC m=+978.769869242 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs") pod "openstack-operator-controller-manager-569b5dc57f-cn6fr" (UID: "22d12b29-fd4e-4aa2-9081-a79a3a539dab") : secret "webhook-server-cert" not found Jan 29 08:57:16 crc kubenswrapper[4895]: E0129 08:57:16.129736 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 08:57:16 crc kubenswrapper[4895]: E0129 08:57:16.130861 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 08:57:16 crc kubenswrapper[4895]: E0129 08:57:16.130987 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert podName:cbca22f6-6189-4f59-b9bd-832466c437d1 nodeName:}" failed. No retries permitted until 2026-01-29 08:57:18.130965196 +0000 UTC m=+979.772473352 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert") pod "infra-operator-controller-manager-79955696d6-tptkw" (UID: "cbca22f6-6189-4f59-b9bd-832466c437d1") : secret "infra-operator-webhook-server-cert" not found Jan 29 08:57:16 crc kubenswrapper[4895]: E0129 08:57:16.131033 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs podName:22d12b29-fd4e-4aa2-9081-a79a3a539dab nodeName:}" failed. No retries permitted until 2026-01-29 08:57:17.131023497 +0000 UTC m=+978.772531643 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs") pod "openstack-operator-controller-manager-569b5dc57f-cn6fr" (UID: "22d12b29-fd4e-4aa2-9081-a79a3a539dab") : secret "metrics-server-cert" not found Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.258279 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-7hp5l"] Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.281315 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-9dpss"] Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.442776 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d\" (UID: \"d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" Jan 29 08:57:16 crc kubenswrapper[4895]: E0129 08:57:16.443345 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:57:16 crc kubenswrapper[4895]: E0129 08:57:16.443412 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert podName:d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8 nodeName:}" failed. No retries permitted until 2026-01-29 08:57:18.443392906 +0000 UTC m=+980.084901052 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" (UID: "d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.970943 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7hp5l" event={"ID":"e97a1d25-e9ba-4ce2-b172-035afb18721b","Type":"ContainerStarted","Data":"7f55580a23198c2c1c253211b6b00519c716c0be70b5f87b28b1fcf702404f10"} Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.978477 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6cz2h" event={"ID":"d4d2a9b0-6258-4257-9824-74abbbc40b24","Type":"ContainerStarted","Data":"043294d86fee83724bd8480811d7d5c81fa522ac8c799f4ee1b212b3fcbe03b1"} Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.980324 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-gd75d" event={"ID":"bc16fc79-c074-4969-af29-c46fdd06f9f8","Type":"ContainerStarted","Data":"dde73d28d18f4cf217b4dbc38b3d98563c636b2251623fc6b85b0e839fba21c8"} Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.981337 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9dpss" event={"ID":"ad4a2a80-d64f-45b0-bea8-dc2e5c7ea050","Type":"ContainerStarted","Data":"b5194fd14ac683871503fde75de97decbbdada19c110d5c0c3e3e8836b2cb876"} Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.983678 4895 generic.go:334] "Generic (PLEG): container finished" podID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerID="e283faf84652d2e1164b1f178cfd437682bdf8b7e6ce6e055041db42bca73378" exitCode=0 Jan 29 08:57:16 crc 
kubenswrapper[4895]: I0129 08:57:16.983719 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerDied","Data":"e283faf84652d2e1164b1f178cfd437682bdf8b7e6ce6e055041db42bca73378"} Jan 29 08:57:16 crc kubenswrapper[4895]: I0129 08:57:16.983784 4895 scope.go:117] "RemoveContainer" containerID="196dd09f37b20983a231714c51e3920c9238c0dcfbe938ccc9dfef7054a9c34d" Jan 29 08:57:17 crc kubenswrapper[4895]: I0129 08:57:17.147032 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:17 crc kubenswrapper[4895]: I0129 08:57:17.147225 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:17 crc kubenswrapper[4895]: E0129 08:57:17.147603 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 08:57:17 crc kubenswrapper[4895]: E0129 08:57:17.147722 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs podName:22d12b29-fd4e-4aa2-9081-a79a3a539dab nodeName:}" failed. No retries permitted until 2026-01-29 08:57:19.14770038 +0000 UTC m=+980.789208526 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs") pod "openstack-operator-controller-manager-569b5dc57f-cn6fr" (UID: "22d12b29-fd4e-4aa2-9081-a79a3a539dab") : secret "webhook-server-cert" not found Jan 29 08:57:17 crc kubenswrapper[4895]: E0129 08:57:17.148434 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 08:57:17 crc kubenswrapper[4895]: E0129 08:57:17.148620 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs podName:22d12b29-fd4e-4aa2-9081-a79a3a539dab nodeName:}" failed. No retries permitted until 2026-01-29 08:57:19.148573302 +0000 UTC m=+980.790081448 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs") pod "openstack-operator-controller-manager-569b5dc57f-cn6fr" (UID: "22d12b29-fd4e-4aa2-9081-a79a3a539dab") : secret "metrics-server-cert" not found Jan 29 08:57:17 crc kubenswrapper[4895]: I0129 08:57:17.952130 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-qz9c2"] Jan 29 08:57:17 crc kubenswrapper[4895]: I0129 08:57:17.973964 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54c4948594-l45qb"] Jan 29 08:57:17 crc kubenswrapper[4895]: I0129 08:57:17.984632 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-dg5kf"] Jan 29 08:57:17 crc kubenswrapper[4895]: I0129 08:57:17.992642 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-58zzj"] Jan 29 08:57:17 crc kubenswrapper[4895]: I0129 
08:57:17.996983 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-gxq7x"] Jan 29 08:57:18 crc kubenswrapper[4895]: I0129 08:57:18.001153 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-pq8r4"] Jan 29 08:57:18 crc kubenswrapper[4895]: I0129 08:57:18.005373 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-8t4nd"] Jan 29 08:57:18 crc kubenswrapper[4895]: I0129 08:57:18.009527 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-sdkzk"] Jan 29 08:57:18 crc kubenswrapper[4895]: I0129 08:57:18.014205 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fczp5"] Jan 29 08:57:18 crc kubenswrapper[4895]: I0129 08:57:18.019953 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-zbdxv"] Jan 29 08:57:18 crc kubenswrapper[4895]: I0129 08:57:18.023757 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-pntdq"] Jan 29 08:57:18 crc kubenswrapper[4895]: W0129 08:57:18.042013 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e73fff0_3497_4937_bfe0_10bea87ddeb3.slice/crio-fc030eb5ed30c1ae1656e77e0cc3593e9a4b8b4539039ff152898848adafdcd6 WatchSource:0}: Error finding container fc030eb5ed30c1ae1656e77e0cc3593e9a4b8b4539039ff152898848adafdcd6: Status 404 returned error can't find the container with id fc030eb5ed30c1ae1656e77e0cc3593e9a4b8b4539039ff152898848adafdcd6 Jan 29 08:57:18 crc kubenswrapper[4895]: I0129 08:57:18.048671 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-54c4948594-l45qb" event={"ID":"baa89b4d-cf32-498b-a624-585afea7f964","Type":"ContainerStarted","Data":"de07479c512490a2993aebbe584d3eec0c3f2b2ab277abbe8788844778b604dd"} Jan 29 08:57:18 crc kubenswrapper[4895]: W0129 08:57:18.052799 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7573c3c1_4b9d_4175_beef_8a4d0c604b6a.slice/crio-bdbccc83a0b6d1e6d90cbd5e693c83f97035f7cee97d53ee64f26c25af8822d3 WatchSource:0}: Error finding container bdbccc83a0b6d1e6d90cbd5e693c83f97035f7cee97d53ee64f26c25af8822d3: Status 404 returned error can't find the container with id bdbccc83a0b6d1e6d90cbd5e693c83f97035f7cee97d53ee64f26c25af8822d3 Jan 29 08:57:18 crc kubenswrapper[4895]: W0129 08:57:18.057714 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc268affd_83d0_4313_a5ba_ee20846ad416.slice/crio-9c3ac826c8e17c1aa34410a7eb5277fd2f10f7b35fd4a17a9c80b3366b00c358 WatchSource:0}: Error finding container 9c3ac826c8e17c1aa34410a7eb5277fd2f10f7b35fd4a17a9c80b3366b00c358: Status 404 returned error can't find the container with id 9c3ac826c8e17c1aa34410a7eb5277fd2f10f7b35fd4a17a9c80b3366b00c358 Jan 29 08:57:18 crc kubenswrapper[4895]: I0129 08:57:18.059409 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerStarted","Data":"ac19f0c558f19013faadf18c0d93d61660767ea0e756e78bbf7d902981654a13"} Jan 29 08:57:18 crc kubenswrapper[4895]: I0129 08:57:18.062209 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-qz9c2" event={"ID":"001b758d-81ef-40e5-b53a-7c264915580d","Type":"ContainerStarted","Data":"58325ca4f90989629a191327d7de030e3bec57fed4f09549b56971ee79d0ad8a"} Jan 
29 08:57:18 crc kubenswrapper[4895]: W0129 08:57:18.064477 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod853077df_3183_4811_8554_5940dc41912e.slice/crio-4b0ea3147b954e07e8405f9250c04cfd666c69607701a4d9f9636f8259724a24 WatchSource:0}: Error finding container 4b0ea3147b954e07e8405f9250c04cfd666c69607701a4d9f9636f8259724a24: Status 404 returned error can't find the container with id 4b0ea3147b954e07e8405f9250c04cfd666c69607701a4d9f9636f8259724a24 Jan 29 08:57:18 crc kubenswrapper[4895]: W0129 08:57:18.073191 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc57b39e7_275d_4ef2_af51_3e0b014182ee.slice/crio-6a8912b7d65e5213108aaab36c5955361fa4226f968f0521812742d20e4731ef WatchSource:0}: Error finding container 6a8912b7d65e5213108aaab36c5955361fa4226f968f0521812742d20e4731ef: Status 404 returned error can't find the container with id 6a8912b7d65e5213108aaab36c5955361fa4226f968f0521812742d20e4731ef Jan 29 08:57:18 crc kubenswrapper[4895]: I0129 08:57:18.197600 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert\") pod \"infra-operator-controller-manager-79955696d6-tptkw\" (UID: \"cbca22f6-6189-4f59-b9bd-832466c437d1\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" Jan 29 08:57:18 crc kubenswrapper[4895]: E0129 08:57:18.198194 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 08:57:18 crc kubenswrapper[4895]: E0129 08:57:18.198319 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert podName:cbca22f6-6189-4f59-b9bd-832466c437d1 nodeName:}" failed. 
No retries permitted until 2026-01-29 08:57:22.198287038 +0000 UTC m=+983.839795184 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert") pod "infra-operator-controller-manager-79955696d6-tptkw" (UID: "cbca22f6-6189-4f59-b9bd-832466c437d1") : secret "infra-operator-webhook-server-cert" not found Jan 29 08:57:18 crc kubenswrapper[4895]: I0129 08:57:18.285867 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz"] Jan 29 08:57:18 crc kubenswrapper[4895]: W0129 08:57:18.306616 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf9282d5_a557_4321_b05d_35552e124429.slice/crio-8418f4b423584444fad3bba5a8626f66830546da57b603caebc264b19cd024f5 WatchSource:0}: Error finding container 8418f4b423584444fad3bba5a8626f66830546da57b603caebc264b19cd024f5: Status 404 returned error can't find the container with id 8418f4b423584444fad3bba5a8626f66830546da57b603caebc264b19cd024f5 Jan 29 08:57:18 crc kubenswrapper[4895]: E0129 08:57:18.314627 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7lgch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-fh6n2_openstack-operators(1c9af700-ef2b-4d02-a76f-77d31d981a5f): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 08:57:18 crc kubenswrapper[4895]: E0129 08:57:18.319375 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fh6n2" podUID="1c9af700-ef2b-4d02-a76f-77d31d981a5f" Jan 29 08:57:18 crc kubenswrapper[4895]: E0129 08:57:18.319698 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6n2rk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-mnp2h_openstack-operators(bf9282d5-a557-4321-b05d-35552e124429): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 08:57:18 crc kubenswrapper[4895]: E0129 08:57:18.321102 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h" podUID="bf9282d5-a557-4321-b05d-35552e124429" Jan 29 08:57:18 crc kubenswrapper[4895]: I0129 08:57:18.322234 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h"] Jan 29 08:57:18 crc kubenswrapper[4895]: I0129 08:57:18.334470 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fh6n2"] Jan 29 08:57:18 crc kubenswrapper[4895]: E0129 08:57:18.340565 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9gfwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-mj7xz_openstack-operators(6bf40523-2804-408c-b50d-cb04bf5b32fc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 08:57:18 crc kubenswrapper[4895]: E0129 08:57:18.340785 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zffbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-zpdkh_openstack-operators(f7276bca-f319-46bf-a1b4-92a6aec8e6e6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 08:57:18 crc kubenswrapper[4895]: E0129 08:57:18.345320 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh" podUID="f7276bca-f319-46bf-a1b4-92a6aec8e6e6" Jan 29 08:57:18 crc 
kubenswrapper[4895]: E0129 08:57:18.345416 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz" podUID="6bf40523-2804-408c-b50d-cb04bf5b32fc" Jan 29 08:57:18 crc kubenswrapper[4895]: E0129 08:57:18.350389 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gjqjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-4zrlz_openstack-operators(7520cf55-cb4a-4598-80d9-499ab60f5ff1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 08:57:18 crc kubenswrapper[4895]: E0129 08:57:18.352095 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz" podUID="7520cf55-cb4a-4598-80d9-499ab60f5ff1" Jan 29 08:57:18 crc kubenswrapper[4895]: I0129 08:57:18.385328 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh"] Jan 29 08:57:18 crc kubenswrapper[4895]: I0129 08:57:18.390755 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz"] Jan 29 08:57:18 crc kubenswrapper[4895]: I0129 08:57:18.505947 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert\") pod 
\"openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d\" (UID: \"d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" Jan 29 08:57:18 crc kubenswrapper[4895]: E0129 08:57:18.506276 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:57:18 crc kubenswrapper[4895]: E0129 08:57:18.506356 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert podName:d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8 nodeName:}" failed. No retries permitted until 2026-01-29 08:57:22.506328 +0000 UTC m=+984.147836146 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" (UID: "d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:57:19 crc kubenswrapper[4895]: I0129 08:57:19.076971 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-pntdq" event={"ID":"c268affd-83d0-4313-a5ba-ee20846ad416","Type":"ContainerStarted","Data":"9c3ac826c8e17c1aa34410a7eb5277fd2f10f7b35fd4a17a9c80b3366b00c358"} Jan 29 08:57:19 crc kubenswrapper[4895]: I0129 08:57:19.079715 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8t4nd" event={"ID":"348e067e-1b54-43e2-9c01-bf430f7a3630","Type":"ContainerStarted","Data":"700b5931959627fef579b4861158566be67c4d83e39f84a0a0668e933994c0eb"} Jan 29 08:57:19 crc kubenswrapper[4895]: I0129 08:57:19.084524 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz" event={"ID":"7520cf55-cb4a-4598-80d9-499ab60f5ff1","Type":"ContainerStarted","Data":"546d57016abaceece021ff9e8e729a04b2d47f932adcad73608a501e72f54721"} Jan 29 08:57:19 crc kubenswrapper[4895]: E0129 08:57:19.094585 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz" podUID="7520cf55-cb4a-4598-80d9-499ab60f5ff1" Jan 29 08:57:19 crc kubenswrapper[4895]: I0129 08:57:19.094970 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-58zzj" event={"ID":"b2dd46da-1ebf-489f-8467-eab7fc206736","Type":"ContainerStarted","Data":"af56d8cc3b14d33a402dfe118738ab2b4e25119b9de0647f421959c97448e1a7"} Jan 29 08:57:19 crc kubenswrapper[4895]: I0129 08:57:19.120147 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sdkzk" event={"ID":"5e73fff0-3497-4937-bfe0-10bea87ddeb3","Type":"ContainerStarted","Data":"fc030eb5ed30c1ae1656e77e0cc3593e9a4b8b4539039ff152898848adafdcd6"} Jan 29 08:57:19 crc kubenswrapper[4895]: I0129 08:57:19.128115 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-pq8r4" event={"ID":"358815d3-7542-429d-bfa0-742e75ada2f6","Type":"ContainerStarted","Data":"0834a857bbeee8e545f676847d94f4b19af81b9dc9e6ba2a125ab6fb628574dd"} Jan 29 08:57:19 crc kubenswrapper[4895]: I0129 08:57:19.130006 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dg5kf" 
event={"ID":"bb23ce65-61d9-4868-8008-7582ded2bff2","Type":"ContainerStarted","Data":"f71a8da164f6e1f79d80c10639ebc6f6aa59045ba1ee1eccfb8d8b9fea5791e3"} Jan 29 08:57:19 crc kubenswrapper[4895]: I0129 08:57:19.135568 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz" event={"ID":"6bf40523-2804-408c-b50d-cb04bf5b32fc","Type":"ContainerStarted","Data":"45433c42b43b8ada06b4180345d8fd379c473e2555a0a3920b6ceaa088ec374f"} Jan 29 08:57:19 crc kubenswrapper[4895]: E0129 08:57:19.137802 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz" podUID="6bf40523-2804-408c-b50d-cb04bf5b32fc" Jan 29 08:57:19 crc kubenswrapper[4895]: I0129 08:57:19.139758 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh" event={"ID":"f7276bca-f319-46bf-a1b4-92a6aec8e6e6","Type":"ContainerStarted","Data":"0a3b65a1404507a45f292a37a40319562dedd88dec58e59b00c9084ceeb55616"} Jan 29 08:57:19 crc kubenswrapper[4895]: E0129 08:57:19.144017 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh" podUID="f7276bca-f319-46bf-a1b4-92a6aec8e6e6" Jan 29 08:57:19 crc kubenswrapper[4895]: I0129 08:57:19.147386 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-zbdxv" 
event={"ID":"c57b39e7-275d-4ef2-af51-3e0b014182ee","Type":"ContainerStarted","Data":"6a8912b7d65e5213108aaab36c5955361fa4226f968f0521812742d20e4731ef"} Jan 29 08:57:19 crc kubenswrapper[4895]: I0129 08:57:19.161421 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fczp5" event={"ID":"853077df-3183-4811-8554-5940dc41912e","Type":"ContainerStarted","Data":"4b0ea3147b954e07e8405f9250c04cfd666c69607701a4d9f9636f8259724a24"} Jan 29 08:57:19 crc kubenswrapper[4895]: I0129 08:57:19.173670 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-gxq7x" event={"ID":"7573c3c1-4b9d-4175-beef-8a4d0c604b6a","Type":"ContainerStarted","Data":"bdbccc83a0b6d1e6d90cbd5e693c83f97035f7cee97d53ee64f26c25af8822d3"} Jan 29 08:57:19 crc kubenswrapper[4895]: I0129 08:57:19.178956 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fh6n2" event={"ID":"1c9af700-ef2b-4d02-a76f-77d31d981a5f","Type":"ContainerStarted","Data":"569b7152d085a358e76d2d2b464a7ea959f281b14f35babb78c391e9baed18f9"} Jan 29 08:57:19 crc kubenswrapper[4895]: E0129 08:57:19.182330 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fh6n2" podUID="1c9af700-ef2b-4d02-a76f-77d31d981a5f" Jan 29 08:57:19 crc kubenswrapper[4895]: I0129 08:57:19.184045 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h" 
event={"ID":"bf9282d5-a557-4321-b05d-35552e124429","Type":"ContainerStarted","Data":"8418f4b423584444fad3bba5a8626f66830546da57b603caebc264b19cd024f5"} Jan 29 08:57:19 crc kubenswrapper[4895]: E0129 08:57:19.194536 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h" podUID="bf9282d5-a557-4321-b05d-35552e124429" Jan 29 08:57:19 crc kubenswrapper[4895]: I0129 08:57:19.223162 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:19 crc kubenswrapper[4895]: I0129 08:57:19.223317 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:19 crc kubenswrapper[4895]: E0129 08:57:19.223444 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 08:57:19 crc kubenswrapper[4895]: E0129 08:57:19.223564 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs podName:22d12b29-fd4e-4aa2-9081-a79a3a539dab nodeName:}" failed. 
No retries permitted until 2026-01-29 08:57:23.223531368 +0000 UTC m=+984.865039694 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs") pod "openstack-operator-controller-manager-569b5dc57f-cn6fr" (UID: "22d12b29-fd4e-4aa2-9081-a79a3a539dab") : secret "webhook-server-cert" not found Jan 29 08:57:19 crc kubenswrapper[4895]: E0129 08:57:19.224234 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 08:57:19 crc kubenswrapper[4895]: E0129 08:57:19.224294 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs podName:22d12b29-fd4e-4aa2-9081-a79a3a539dab nodeName:}" failed. No retries permitted until 2026-01-29 08:57:23.224283519 +0000 UTC m=+984.865791855 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs") pod "openstack-operator-controller-manager-569b5dc57f-cn6fr" (UID: "22d12b29-fd4e-4aa2-9081-a79a3a539dab") : secret "metrics-server-cert" not found Jan 29 08:57:20 crc kubenswrapper[4895]: E0129 08:57:20.216093 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h" podUID="bf9282d5-a557-4321-b05d-35552e124429" Jan 29 08:57:20 crc kubenswrapper[4895]: E0129 08:57:20.222677 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz" podUID="7520cf55-cb4a-4598-80d9-499ab60f5ff1" Jan 29 08:57:20 crc kubenswrapper[4895]: E0129 08:57:20.222722 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz" podUID="6bf40523-2804-408c-b50d-cb04bf5b32fc" Jan 29 08:57:20 crc kubenswrapper[4895]: E0129 08:57:20.222681 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh" podUID="f7276bca-f319-46bf-a1b4-92a6aec8e6e6" Jan 29 08:57:20 crc kubenswrapper[4895]: E0129 08:57:20.224271 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fh6n2" podUID="1c9af700-ef2b-4d02-a76f-77d31d981a5f" Jan 29 08:57:22 crc kubenswrapper[4895]: I0129 08:57:22.265685 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert\") pod \"infra-operator-controller-manager-79955696d6-tptkw\" (UID: 
\"cbca22f6-6189-4f59-b9bd-832466c437d1\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" Jan 29 08:57:22 crc kubenswrapper[4895]: E0129 08:57:22.266150 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 08:57:22 crc kubenswrapper[4895]: E0129 08:57:22.266243 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert podName:cbca22f6-6189-4f59-b9bd-832466c437d1 nodeName:}" failed. No retries permitted until 2026-01-29 08:57:30.26621775 +0000 UTC m=+991.907725886 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert") pod "infra-operator-controller-manager-79955696d6-tptkw" (UID: "cbca22f6-6189-4f59-b9bd-832466c437d1") : secret "infra-operator-webhook-server-cert" not found Jan 29 08:57:22 crc kubenswrapper[4895]: I0129 08:57:22.519765 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d\" (UID: \"d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" Jan 29 08:57:22 crc kubenswrapper[4895]: E0129 08:57:22.519953 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:57:22 crc kubenswrapper[4895]: E0129 08:57:22.520001 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert podName:d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8 nodeName:}" failed. 
No retries permitted until 2026-01-29 08:57:30.519987202 +0000 UTC m=+992.161495348 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" (UID: "d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 08:57:23 crc kubenswrapper[4895]: I0129 08:57:23.246852 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:23 crc kubenswrapper[4895]: E0129 08:57:23.247096 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 08:57:23 crc kubenswrapper[4895]: I0129 08:57:23.247458 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:23 crc kubenswrapper[4895]: E0129 08:57:23.247467 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs podName:22d12b29-fd4e-4aa2-9081-a79a3a539dab nodeName:}" failed. No retries permitted until 2026-01-29 08:57:31.247440454 +0000 UTC m=+992.888948600 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs") pod "openstack-operator-controller-manager-569b5dc57f-cn6fr" (UID: "22d12b29-fd4e-4aa2-9081-a79a3a539dab") : secret "webhook-server-cert" not found Jan 29 08:57:23 crc kubenswrapper[4895]: E0129 08:57:23.247858 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 08:57:23 crc kubenswrapper[4895]: E0129 08:57:23.248194 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs podName:22d12b29-fd4e-4aa2-9081-a79a3a539dab nodeName:}" failed. No retries permitted until 2026-01-29 08:57:31.248170874 +0000 UTC m=+992.889679020 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs") pod "openstack-operator-controller-manager-569b5dc57f-cn6fr" (UID: "22d12b29-fd4e-4aa2-9081-a79a3a539dab") : secret "metrics-server-cert" not found Jan 29 08:57:30 crc kubenswrapper[4895]: I0129 08:57:30.333494 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert\") pod \"infra-operator-controller-manager-79955696d6-tptkw\" (UID: \"cbca22f6-6189-4f59-b9bd-832466c437d1\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" Jan 29 08:57:30 crc kubenswrapper[4895]: I0129 08:57:30.342844 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbca22f6-6189-4f59-b9bd-832466c437d1-cert\") pod \"infra-operator-controller-manager-79955696d6-tptkw\" (UID: \"cbca22f6-6189-4f59-b9bd-832466c437d1\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" Jan 29 08:57:30 crc 
kubenswrapper[4895]: I0129 08:57:30.399142 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" Jan 29 08:57:30 crc kubenswrapper[4895]: I0129 08:57:30.537010 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d\" (UID: \"d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" Jan 29 08:57:30 crc kubenswrapper[4895]: I0129 08:57:30.541431 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d\" (UID: \"d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" Jan 29 08:57:30 crc kubenswrapper[4895]: I0129 08:57:30.575669 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" Jan 29 08:57:31 crc kubenswrapper[4895]: E0129 08:57:31.122834 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 29 08:57:31 crc kubenswrapper[4895]: E0129 08:57:31.123616 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h456t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-9dpss_openstack-operators(ad4a2a80-d64f-45b0-bea8-dc2e5c7ea050): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:31 crc kubenswrapper[4895]: E0129 08:57:31.124741 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9dpss" podUID="ad4a2a80-d64f-45b0-bea8-dc2e5c7ea050" Jan 29 08:57:31 crc kubenswrapper[4895]: I0129 08:57:31.257704 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs\") pod 
\"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:31 crc kubenswrapper[4895]: I0129 08:57:31.257852 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:31 crc kubenswrapper[4895]: E0129 08:57:31.258060 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 08:57:31 crc kubenswrapper[4895]: E0129 08:57:31.258159 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs podName:22d12b29-fd4e-4aa2-9081-a79a3a539dab nodeName:}" failed. No retries permitted until 2026-01-29 08:57:47.258138911 +0000 UTC m=+1008.899647057 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs") pod "openstack-operator-controller-manager-569b5dc57f-cn6fr" (UID: "22d12b29-fd4e-4aa2-9081-a79a3a539dab") : secret "webhook-server-cert" not found Jan 29 08:57:31 crc kubenswrapper[4895]: I0129 08:57:31.271564 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-metrics-certs\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:31 crc kubenswrapper[4895]: E0129 08:57:31.839945 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9dpss" podUID="ad4a2a80-d64f-45b0-bea8-dc2e5c7ea050" Jan 29 08:57:35 crc kubenswrapper[4895]: E0129 08:57:35.021650 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Jan 29 08:57:35 crc kubenswrapper[4895]: E0129 08:57:35.022264 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hgg2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-zbdxv_openstack-operators(c57b39e7-275d-4ef2-af51-3e0b014182ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:35 crc kubenswrapper[4895]: E0129 08:57:35.023475 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-zbdxv" podUID="c57b39e7-275d-4ef2-af51-3e0b014182ee" Jan 29 08:57:35 crc kubenswrapper[4895]: E0129 08:57:35.884472 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-zbdxv" podUID="c57b39e7-275d-4ef2-af51-3e0b014182ee" Jan 29 08:57:36 crc kubenswrapper[4895]: E0129 08:57:36.680047 4895 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Jan 29 08:57:36 crc kubenswrapper[4895]: E0129 08:57:36.680282 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tmn7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-pntdq_openstack-operators(c268affd-83d0-4313-a5ba-ee20846ad416): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:36 crc kubenswrapper[4895]: E0129 08:57:36.681689 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-pntdq" podUID="c268affd-83d0-4313-a5ba-ee20846ad416" Jan 29 08:57:36 crc kubenswrapper[4895]: E0129 08:57:36.897320 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-pntdq" podUID="c268affd-83d0-4313-a5ba-ee20846ad416" Jan 29 08:57:37 crc kubenswrapper[4895]: E0129 08:57:37.851667 4895 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 29 08:57:37 crc kubenswrapper[4895]: E0129 08:57:37.851870 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2tr6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-gxq7x_openstack-operators(7573c3c1-4b9d-4175-beef-8a4d0c604b6a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:37 crc kubenswrapper[4895]: E0129 08:57:37.853098 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-gxq7x" podUID="7573c3c1-4b9d-4175-beef-8a4d0c604b6a" Jan 29 08:57:37 crc kubenswrapper[4895]: E0129 08:57:37.906722 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-gxq7x" podUID="7573c3c1-4b9d-4175-beef-8a4d0c604b6a" Jan 29 08:57:38 crc kubenswrapper[4895]: I0129 08:57:38.631099 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-tgs4j"] Jan 29 08:57:38 crc kubenswrapper[4895]: I0129 08:57:38.633578 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:57:38 crc kubenswrapper[4895]: I0129 08:57:38.645643 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgs4j"] Jan 29 08:57:38 crc kubenswrapper[4895]: I0129 08:57:38.782596 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94afc809-09fa-479a-b4d0-eea69e4ee4a5-utilities\") pod \"redhat-marketplace-tgs4j\" (UID: \"94afc809-09fa-479a-b4d0-eea69e4ee4a5\") " pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:57:38 crc kubenswrapper[4895]: I0129 08:57:38.782766 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmgs2\" (UniqueName: \"kubernetes.io/projected/94afc809-09fa-479a-b4d0-eea69e4ee4a5-kube-api-access-tmgs2\") pod \"redhat-marketplace-tgs4j\" (UID: \"94afc809-09fa-479a-b4d0-eea69e4ee4a5\") " pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:57:38 crc kubenswrapper[4895]: I0129 08:57:38.782787 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94afc809-09fa-479a-b4d0-eea69e4ee4a5-catalog-content\") pod \"redhat-marketplace-tgs4j\" (UID: \"94afc809-09fa-479a-b4d0-eea69e4ee4a5\") " pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:57:38 crc kubenswrapper[4895]: I0129 08:57:38.884609 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94afc809-09fa-479a-b4d0-eea69e4ee4a5-utilities\") pod \"redhat-marketplace-tgs4j\" (UID: \"94afc809-09fa-479a-b4d0-eea69e4ee4a5\") " 
pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:57:38 crc kubenswrapper[4895]: I0129 08:57:38.884902 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmgs2\" (UniqueName: \"kubernetes.io/projected/94afc809-09fa-479a-b4d0-eea69e4ee4a5-kube-api-access-tmgs2\") pod \"redhat-marketplace-tgs4j\" (UID: \"94afc809-09fa-479a-b4d0-eea69e4ee4a5\") " pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:57:38 crc kubenswrapper[4895]: I0129 08:57:38.884969 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94afc809-09fa-479a-b4d0-eea69e4ee4a5-catalog-content\") pod \"redhat-marketplace-tgs4j\" (UID: \"94afc809-09fa-479a-b4d0-eea69e4ee4a5\") " pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:57:38 crc kubenswrapper[4895]: I0129 08:57:38.885411 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94afc809-09fa-479a-b4d0-eea69e4ee4a5-utilities\") pod \"redhat-marketplace-tgs4j\" (UID: \"94afc809-09fa-479a-b4d0-eea69e4ee4a5\") " pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:57:38 crc kubenswrapper[4895]: I0129 08:57:38.886098 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94afc809-09fa-479a-b4d0-eea69e4ee4a5-catalog-content\") pod \"redhat-marketplace-tgs4j\" (UID: \"94afc809-09fa-479a-b4d0-eea69e4ee4a5\") " pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:57:38 crc kubenswrapper[4895]: I0129 08:57:38.946998 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmgs2\" (UniqueName: \"kubernetes.io/projected/94afc809-09fa-479a-b4d0-eea69e4ee4a5-kube-api-access-tmgs2\") pod \"redhat-marketplace-tgs4j\" (UID: \"94afc809-09fa-479a-b4d0-eea69e4ee4a5\") " 
pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:57:38 crc kubenswrapper[4895]: I0129 08:57:38.961212 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:57:41 crc kubenswrapper[4895]: E0129 08:57:41.358862 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4" Jan 29 08:57:41 crc kubenswrapper[4895]: E0129 08:57:41.359547 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b9jpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-8886f4c47-7hp5l_openstack-operators(e97a1d25-e9ba-4ce2-b172-035afb18721b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:41 crc kubenswrapper[4895]: E0129 08:57:41.360836 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7hp5l" podUID="e97a1d25-e9ba-4ce2-b172-035afb18721b" Jan 29 08:57:42 crc kubenswrapper[4895]: E0129 08:57:42.026586 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Jan 29 08:57:42 crc kubenswrapper[4895]: E0129 08:57:42.027244 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c7wdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-58zzj_openstack-operators(b2dd46da-1ebf-489f-8467-eab7fc206736): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:42 crc kubenswrapper[4895]: E0129 08:57:42.028505 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-58zzj" podUID="b2dd46da-1ebf-489f-8467-eab7fc206736" Jan 29 08:57:42 crc kubenswrapper[4895]: E0129 08:57:42.048629 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7hp5l" podUID="e97a1d25-e9ba-4ce2-b172-035afb18721b" Jan 29 08:57:42 crc kubenswrapper[4895]: E0129 08:57:42.052015 4895 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-58zzj" podUID="b2dd46da-1ebf-489f-8467-eab7fc206736" Jan 29 08:57:42 crc kubenswrapper[4895]: E0129 08:57:42.900298 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c" Jan 29 08:57:42 crc kubenswrapper[4895]: E0129 08:57:42.900907 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jmlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7b6c4d8c5f-gd75d_openstack-operators(bc16fc79-c074-4969-af29-c46fdd06f9f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:42 crc kubenswrapper[4895]: E0129 08:57:42.902018 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-gd75d" podUID="bc16fc79-c074-4969-af29-c46fdd06f9f8" Jan 29 08:57:43 crc kubenswrapper[4895]: E0129 08:57:43.068971 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-gd75d" podUID="bc16fc79-c074-4969-af29-c46fdd06f9f8" Jan 29 08:57:47 crc kubenswrapper[4895]: I0129 08:57:47.351325 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:47 crc kubenswrapper[4895]: I0129 08:57:47.359353 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22d12b29-fd4e-4aa2-9081-a79a3a539dab-webhook-certs\") pod \"openstack-operator-controller-manager-569b5dc57f-cn6fr\" (UID: \"22d12b29-fd4e-4aa2-9081-a79a3a539dab\") " pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:47 crc kubenswrapper[4895]: I0129 08:57:47.459960 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:57:52 crc kubenswrapper[4895]: E0129 08:57:52.989758 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be" Jan 29 08:57:52 crc kubenswrapper[4895]: E0129 08:57:52.991251 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gg5gl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-qz9c2_openstack-operators(001b758d-81ef-40e5-b53a-7c264915580d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:52 crc kubenswrapper[4895]: E0129 08:57:52.992486 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-qz9c2" podUID="001b758d-81ef-40e5-b53a-7c264915580d" Jan 29 08:57:53 crc kubenswrapper[4895]: E0129 08:57:53.136421 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-qz9c2" podUID="001b758d-81ef-40e5-b53a-7c264915580d" Jan 29 08:57:55 crc kubenswrapper[4895]: E0129 08:57:55.238493 4895 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 29 08:57:55 crc kubenswrapper[4895]: E0129 08:57:55.239132 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gjqjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-4zrlz_openstack-operators(7520cf55-cb4a-4598-80d9-499ab60f5ff1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:55 crc kubenswrapper[4895]: E0129 08:57:55.240308 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz" podUID="7520cf55-cb4a-4598-80d9-499ab60f5ff1" Jan 29 08:57:56 crc kubenswrapper[4895]: E0129 08:57:56.176532 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Jan 29 08:57:56 crc kubenswrapper[4895]: E0129 08:57:56.176830 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9gfwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-mj7xz_openstack-operators(6bf40523-2804-408c-b50d-cb04bf5b32fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:56 crc kubenswrapper[4895]: E0129 08:57:56.178031 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz" podUID="6bf40523-2804-408c-b50d-cb04bf5b32fc" Jan 29 08:57:56 crc kubenswrapper[4895]: E0129 08:57:56.964726 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 29 08:57:56 crc kubenswrapper[4895]: E0129 08:57:56.965643 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6n2rk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-mnp2h_openstack-operators(bf9282d5-a557-4321-b05d-35552e124429): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:56 crc kubenswrapper[4895]: E0129 08:57:56.968079 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h" podUID="bf9282d5-a557-4321-b05d-35552e124429" Jan 29 08:57:57 crc kubenswrapper[4895]: E0129 08:57:57.067571 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.102:5001/openstack-k8s-operators/ironic-operator:b2f66955eff81fe09e65525e07d9fb3d17bb4856" Jan 29 08:57:57 crc kubenswrapper[4895]: E0129 08:57:57.067698 4895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.102:5001/openstack-k8s-operators/ironic-operator:b2f66955eff81fe09e65525e07d9fb3d17bb4856" Jan 29 08:57:57 crc kubenswrapper[4895]: E0129 08:57:57.067867 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.102:5001/openstack-k8s-operators/ironic-operator:b2f66955eff81fe09e65525e07d9fb3d17bb4856,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9pzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-54c4948594-l45qb_openstack-operators(baa89b4d-cf32-498b-a624-585afea7f964): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:57 crc kubenswrapper[4895]: E0129 08:57:57.069197 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-54c4948594-l45qb" podUID="baa89b4d-cf32-498b-a624-585afea7f964" Jan 29 08:57:57 crc kubenswrapper[4895]: E0129 08:57:57.169103 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.102:5001/openstack-k8s-operators/ironic-operator:b2f66955eff81fe09e65525e07d9fb3d17bb4856\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-54c4948594-l45qb" podUID="baa89b4d-cf32-498b-a624-585afea7f964" Jan 29 08:57:57 crc kubenswrapper[4895]: E0129 08:57:57.616269 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 29 08:57:57 crc kubenswrapper[4895]: E0129 08:57:57.616484 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j65wb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-8t4nd_openstack-operators(348e067e-1b54-43e2-9c01-bf430f7a3630): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:57 crc kubenswrapper[4895]: E0129 08:57:57.617908 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8t4nd" podUID="348e067e-1b54-43e2-9c01-bf430f7a3630" Jan 29 08:57:58 crc kubenswrapper[4895]: E0129 08:57:58.207684 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8t4nd" podUID="348e067e-1b54-43e2-9c01-bf430f7a3630" Jan 29 08:57:58 crc kubenswrapper[4895]: I0129 08:57:58.743745 4895 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-tptkw"] Jan 29 08:57:58 crc kubenswrapper[4895]: E0129 08:57:58.856493 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 29 08:57:58 crc kubenswrapper[4895]: E0129 08:57:58.856708 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7lgch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-fh6n2_openstack-operators(1c9af700-ef2b-4d02-a76f-77d31d981a5f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:58 crc kubenswrapper[4895]: E0129 08:57:58.857954 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fh6n2" podUID="1c9af700-ef2b-4d02-a76f-77d31d981a5f" Jan 29 08:57:59 crc kubenswrapper[4895]: E0129 08:57:59.462854 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 29 08:57:59 crc kubenswrapper[4895]: E0129 08:57:59.463249 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zffbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-zpdkh_openstack-operators(f7276bca-f319-46bf-a1b4-92a6aec8e6e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:57:59 crc kubenswrapper[4895]: E0129 08:57:59.464559 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh" podUID="f7276bca-f319-46bf-a1b4-92a6aec8e6e6" Jan 29 08:57:59 crc kubenswrapper[4895]: I0129 08:57:59.486244 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.126071 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgs4j"] Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.152882 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d"] Jan 29 08:58:00 crc kubenswrapper[4895]: W0129 
08:58:00.159744 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94afc809_09fa_479a_b4d0_eea69e4ee4a5.slice/crio-e0d573bef73fd725d6a2aa88a6f526751531a0648014e06347a9cb8f0295fbf6 WatchSource:0}: Error finding container e0d573bef73fd725d6a2aa88a6f526751531a0648014e06347a9cb8f0295fbf6: Status 404 returned error can't find the container with id e0d573bef73fd725d6a2aa88a6f526751531a0648014e06347a9cb8f0295fbf6 Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.198671 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr"] Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.258034 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-zbdxv" event={"ID":"c57b39e7-275d-4ef2-af51-3e0b014182ee","Type":"ContainerStarted","Data":"b6e15b9bcd3e44dd5e7c480e5d055b9a865522ad2bd042a89d883ac2a20a16f1"} Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.259337 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-zbdxv" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.261491 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-58zzj" event={"ID":"b2dd46da-1ebf-489f-8467-eab7fc206736","Type":"ContainerStarted","Data":"2f57571a6fd4941123d60fab0663c9e914c93752cf9d00a927ad81862995d794"} Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.261986 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-58zzj" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.264883 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7hp5l" event={"ID":"e97a1d25-e9ba-4ce2-b172-035afb18721b","Type":"ContainerStarted","Data":"64776cd285faae580804e2fdd26a27e197bdca566efa8fa694c86b03fc32a534"} Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.265501 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7hp5l" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.269298 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" event={"ID":"cbca22f6-6189-4f59-b9bd-832466c437d1","Type":"ContainerStarted","Data":"1e2757a6df4f0dd0b6c1de21b4474e773fd5deddf7e1cbda3e636773193ea992"} Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.277887 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-pq8r4" event={"ID":"358815d3-7542-429d-bfa0-742e75ada2f6","Type":"ContainerStarted","Data":"c6b934e2e1605ba81bb3c4b8e630a9f308633ad48ac78a8d382956288e8676db"} Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.278978 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-pq8r4" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.280683 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgs4j" event={"ID":"94afc809-09fa-479a-b4d0-eea69e4ee4a5","Type":"ContainerStarted","Data":"e0d573bef73fd725d6a2aa88a6f526751531a0648014e06347a9cb8f0295fbf6"} Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.281971 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6cz2h" 
event={"ID":"d4d2a9b0-6258-4257-9824-74abbbc40b24","Type":"ContainerStarted","Data":"6cc91aac46b7ab1df9520a23a8ee3c23e91991c1a7713c93095127581e92f4e4"} Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.282403 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6cz2h" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.288252 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sdkzk" event={"ID":"5e73fff0-3497-4937-bfe0-10bea87ddeb3","Type":"ContainerStarted","Data":"8ef0606ebd5baf98001a86a161fb24efe55a878e0ac3c358c3dfbf853dbbd597"} Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.296540 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-gd75d" event={"ID":"bc16fc79-c074-4969-af29-c46fdd06f9f8","Type":"ContainerStarted","Data":"972e406a947fb5019443eb8d33c9fdff6d9a2da35ec58a83e90de8075ed0c778"} Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.296789 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-gd75d" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.304335 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9dpss" event={"ID":"ad4a2a80-d64f-45b0-bea8-dc2e5c7ea050","Type":"ContainerStarted","Data":"f73418106ad42b39ef9d8ce2f7ae00331d7adbf579a28f97be59b79eaeb58afc"} Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.304565 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9dpss" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.349173 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dg5kf" event={"ID":"bb23ce65-61d9-4868-8008-7582ded2bff2","Type":"ContainerStarted","Data":"572afdf2ac637a8c68fbebc0fc9ebebf6dc7456506f35404d694d00e81514a11"} Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.350171 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dg5kf" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.360484 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-pntdq" event={"ID":"c268affd-83d0-4313-a5ba-ee20846ad416","Type":"ContainerStarted","Data":"b217b5ecbbb075254ffd8af9864a79ce7db136060fa94cba037c2cac49a2f4ca"} Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.371182 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fczp5" event={"ID":"853077df-3183-4811-8554-5940dc41912e","Type":"ContainerStarted","Data":"1cf79821b8a7afffd502ccc629ee4a96dc11ca3f1618450ad93c921ea1a386c1"} Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.371847 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fczp5" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.373172 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-gxq7x" event={"ID":"7573c3c1-4b9d-4175-beef-8a4d0c604b6a","Type":"ContainerStarted","Data":"8b401b6e4ac086cb6ce21f22345f8271ff359064ed2f9fd5c091529a14b8e689"} Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.373543 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-gxq7x" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.374818 4895 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" event={"ID":"d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8","Type":"ContainerStarted","Data":"94767a586f7bc56f637df99adf582e388984d3d22525a6d7c317c34f11c00580"} Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.583761 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-zbdxv" podStartSLOduration=5.185257636 podStartE2EDuration="46.583736254s" podCreationTimestamp="2026-01-29 08:57:14 +0000 UTC" firstStartedPulling="2026-01-29 08:57:18.084773604 +0000 UTC m=+979.726281750" lastFinishedPulling="2026-01-29 08:57:59.483252222 +0000 UTC m=+1021.124760368" observedRunningTime="2026-01-29 08:58:00.437319581 +0000 UTC m=+1022.078827727" watchObservedRunningTime="2026-01-29 08:58:00.583736254 +0000 UTC m=+1022.225244400" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.597645 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-58zzj" podStartSLOduration=6.094191702 podStartE2EDuration="47.597609815s" podCreationTimestamp="2026-01-29 08:57:13 +0000 UTC" firstStartedPulling="2026-01-29 08:57:18.032310102 +0000 UTC m=+979.673818248" lastFinishedPulling="2026-01-29 08:57:59.535728215 +0000 UTC m=+1021.177236361" observedRunningTime="2026-01-29 08:58:00.58059162 +0000 UTC m=+1022.222099786" watchObservedRunningTime="2026-01-29 08:58:00.597609815 +0000 UTC m=+1022.239117961" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.622785 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-gxq7x" podStartSLOduration=5.202317121 podStartE2EDuration="46.622761627s" podCreationTimestamp="2026-01-29 08:57:14 +0000 UTC" firstStartedPulling="2026-01-29 08:57:18.059263412 +0000 UTC 
m=+979.700771558" lastFinishedPulling="2026-01-29 08:57:59.479707928 +0000 UTC m=+1021.121216064" observedRunningTime="2026-01-29 08:58:00.612794101 +0000 UTC m=+1022.254302247" watchObservedRunningTime="2026-01-29 08:58:00.622761627 +0000 UTC m=+1022.264269773" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.691319 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-gd75d" podStartSLOduration=4.382784062 podStartE2EDuration="47.691288159s" podCreationTimestamp="2026-01-29 08:57:13 +0000 UTC" firstStartedPulling="2026-01-29 08:57:16.207394938 +0000 UTC m=+977.848903084" lastFinishedPulling="2026-01-29 08:57:59.515899035 +0000 UTC m=+1021.157407181" observedRunningTime="2026-01-29 08:58:00.644817247 +0000 UTC m=+1022.286325403" watchObservedRunningTime="2026-01-29 08:58:00.691288159 +0000 UTC m=+1022.332796305" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.701054 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-pq8r4" podStartSLOduration=8.001469143 podStartE2EDuration="46.701025939s" podCreationTimestamp="2026-01-29 08:57:14 +0000 UTC" firstStartedPulling="2026-01-29 08:57:18.255202599 +0000 UTC m=+979.896710745" lastFinishedPulling="2026-01-29 08:57:56.954759395 +0000 UTC m=+1018.596267541" observedRunningTime="2026-01-29 08:58:00.682423632 +0000 UTC m=+1022.323931768" watchObservedRunningTime="2026-01-29 08:58:00.701025939 +0000 UTC m=+1022.342534085" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.719963 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dg5kf" podStartSLOduration=7.8015584 podStartE2EDuration="46.719945825s" podCreationTimestamp="2026-01-29 08:57:14 +0000 UTC" firstStartedPulling="2026-01-29 08:57:18.037150001 +0000 UTC m=+979.678658147" 
lastFinishedPulling="2026-01-29 08:57:56.955537436 +0000 UTC m=+1018.597045572" observedRunningTime="2026-01-29 08:58:00.719508003 +0000 UTC m=+1022.361016149" watchObservedRunningTime="2026-01-29 08:58:00.719945825 +0000 UTC m=+1022.361453971" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.791463 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6cz2h" podStartSLOduration=8.869693921 podStartE2EDuration="47.791432946s" podCreationTimestamp="2026-01-29 08:57:13 +0000 UTC" firstStartedPulling="2026-01-29 08:57:16.30662885 +0000 UTC m=+977.948136996" lastFinishedPulling="2026-01-29 08:57:55.228367875 +0000 UTC m=+1016.869876021" observedRunningTime="2026-01-29 08:58:00.784417878 +0000 UTC m=+1022.425926024" watchObservedRunningTime="2026-01-29 08:58:00.791432946 +0000 UTC m=+1022.432941092" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.817249 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7hp5l" podStartSLOduration=4.879021455 podStartE2EDuration="47.817226455s" podCreationTimestamp="2026-01-29 08:57:13 +0000 UTC" firstStartedPulling="2026-01-29 08:57:16.597488454 +0000 UTC m=+978.238996600" lastFinishedPulling="2026-01-29 08:57:59.535693454 +0000 UTC m=+1021.177201600" observedRunningTime="2026-01-29 08:58:00.815467568 +0000 UTC m=+1022.456975724" watchObservedRunningTime="2026-01-29 08:58:00.817226455 +0000 UTC m=+1022.458734601" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.852608 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9dpss" podStartSLOduration=4.966488632 podStartE2EDuration="47.852582499s" podCreationTimestamp="2026-01-29 08:57:13 +0000 UTC" firstStartedPulling="2026-01-29 08:57:16.597069503 +0000 UTC m=+978.238577649" 
lastFinishedPulling="2026-01-29 08:57:59.48316337 +0000 UTC m=+1021.124671516" observedRunningTime="2026-01-29 08:58:00.842643634 +0000 UTC m=+1022.484151780" watchObservedRunningTime="2026-01-29 08:58:00.852582499 +0000 UTC m=+1022.494090645" Jan 29 08:58:00 crc kubenswrapper[4895]: I0129 08:58:00.886396 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fczp5" podStartSLOduration=8.003052335 podStartE2EDuration="46.886365642s" podCreationTimestamp="2026-01-29 08:57:14 +0000 UTC" firstStartedPulling="2026-01-29 08:57:18.072570438 +0000 UTC m=+979.714078584" lastFinishedPulling="2026-01-29 08:57:56.955883745 +0000 UTC m=+1018.597391891" observedRunningTime="2026-01-29 08:58:00.881318838 +0000 UTC m=+1022.522826994" watchObservedRunningTime="2026-01-29 08:58:00.886365642 +0000 UTC m=+1022.527873788" Jan 29 08:58:01 crc kubenswrapper[4895]: I0129 08:58:01.458313 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" event={"ID":"22d12b29-fd4e-4aa2-9081-a79a3a539dab","Type":"ContainerStarted","Data":"493b742a8b2a791cefeabcf8829158bd38003305ba93ffede279142c69cc2971"} Jan 29 08:58:01 crc kubenswrapper[4895]: I0129 08:58:01.458373 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" event={"ID":"22d12b29-fd4e-4aa2-9081-a79a3a539dab","Type":"ContainerStarted","Data":"6fe535579ff9b740f7c3ca496e13d65a6ebd17b2c3b1541cfb5620d30cb44e7c"} Jan 29 08:58:01 crc kubenswrapper[4895]: I0129 08:58:01.458627 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:58:01 crc kubenswrapper[4895]: I0129 08:58:01.473738 4895 generic.go:334] "Generic (PLEG): container finished" podID="94afc809-09fa-479a-b4d0-eea69e4ee4a5" 
containerID="12fb14a86d0d2238f3dae3d8c0fc91f0bb160091d70d36e80d1e4fe9d31ed390" exitCode=0 Jan 29 08:58:01 crc kubenswrapper[4895]: I0129 08:58:01.475275 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgs4j" event={"ID":"94afc809-09fa-479a-b4d0-eea69e4ee4a5","Type":"ContainerDied","Data":"12fb14a86d0d2238f3dae3d8c0fc91f0bb160091d70d36e80d1e4fe9d31ed390"} Jan 29 08:58:01 crc kubenswrapper[4895]: I0129 08:58:01.475379 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-pntdq" Jan 29 08:58:01 crc kubenswrapper[4895]: I0129 08:58:01.670093 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" podStartSLOduration=47.67007075 podStartE2EDuration="47.67007075s" podCreationTimestamp="2026-01-29 08:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:58:01.669345921 +0000 UTC m=+1023.310854067" watchObservedRunningTime="2026-01-29 08:58:01.67007075 +0000 UTC m=+1023.311578896" Jan 29 08:58:01 crc kubenswrapper[4895]: I0129 08:58:01.735244 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-pntdq" podStartSLOduration=6.32434969 podStartE2EDuration="47.735209489s" podCreationTimestamp="2026-01-29 08:57:14 +0000 UTC" firstStartedPulling="2026-01-29 08:57:18.069713601 +0000 UTC m=+979.711221747" lastFinishedPulling="2026-01-29 08:57:59.48057341 +0000 UTC m=+1021.122081546" observedRunningTime="2026-01-29 08:58:01.721856802 +0000 UTC m=+1023.363364958" watchObservedRunningTime="2026-01-29 08:58:01.735209489 +0000 UTC m=+1023.376717645" Jan 29 08:58:01 crc kubenswrapper[4895]: I0129 08:58:01.811361 4895 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sdkzk" podStartSLOduration=9.910265142 podStartE2EDuration="48.811338774s" podCreationTimestamp="2026-01-29 08:57:13 +0000 UTC" firstStartedPulling="2026-01-29 08:57:18.053005065 +0000 UTC m=+979.694513221" lastFinishedPulling="2026-01-29 08:57:56.954078717 +0000 UTC m=+1018.595586853" observedRunningTime="2026-01-29 08:58:01.804797458 +0000 UTC m=+1023.446305604" watchObservedRunningTime="2026-01-29 08:58:01.811338774 +0000 UTC m=+1023.452846920" Jan 29 08:58:03 crc kubenswrapper[4895]: I0129 08:58:03.584250 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgs4j" event={"ID":"94afc809-09fa-479a-b4d0-eea69e4ee4a5","Type":"ContainerStarted","Data":"a8acee2ed2c8aa5f8c49aab39ef70908e386b151833c36659ad359287c955362"} Jan 29 08:58:04 crc kubenswrapper[4895]: I0129 08:58:04.164299 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6cz2h" Jan 29 08:58:04 crc kubenswrapper[4895]: I0129 08:58:04.404419 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sdkzk" Jan 29 08:58:04 crc kubenswrapper[4895]: I0129 08:58:04.460835 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9dpss" Jan 29 08:58:04 crc kubenswrapper[4895]: I0129 08:58:04.602041 4895 generic.go:334] "Generic (PLEG): container finished" podID="94afc809-09fa-479a-b4d0-eea69e4ee4a5" containerID="a8acee2ed2c8aa5f8c49aab39ef70908e386b151833c36659ad359287c955362" exitCode=0 Jan 29 08:58:04 crc kubenswrapper[4895]: I0129 08:58:04.602088 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgs4j" 
event={"ID":"94afc809-09fa-479a-b4d0-eea69e4ee4a5","Type":"ContainerDied","Data":"a8acee2ed2c8aa5f8c49aab39ef70908e386b151833c36659ad359287c955362"} Jan 29 08:58:05 crc kubenswrapper[4895]: I0129 08:58:05.342276 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-zbdxv" Jan 29 08:58:05 crc kubenswrapper[4895]: I0129 08:58:05.352643 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-pq8r4" Jan 29 08:58:05 crc kubenswrapper[4895]: I0129 08:58:05.352794 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-pntdq" Jan 29 08:58:05 crc kubenswrapper[4895]: I0129 08:58:05.408108 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-gxq7x" Jan 29 08:58:05 crc kubenswrapper[4895]: I0129 08:58:05.846354 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-fczp5" Jan 29 08:58:05 crc kubenswrapper[4895]: I0129 08:58:05.953184 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-dg5kf" Jan 29 08:58:07 crc kubenswrapper[4895]: E0129 08:58:07.214872 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz" podUID="7520cf55-cb4a-4598-80d9-499ab60f5ff1" Jan 29 08:58:07 crc kubenswrapper[4895]: I0129 08:58:07.468358 4895 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-569b5dc57f-cn6fr" Jan 29 08:58:08 crc kubenswrapper[4895]: I0129 08:58:08.654156 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" event={"ID":"d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8","Type":"ContainerStarted","Data":"bd9110c877dfdecc862d3aad73db20505f605f599ddeaffe72dbd5505b083c3f"} Jan 29 08:58:08 crc kubenswrapper[4895]: I0129 08:58:08.655853 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" Jan 29 08:58:08 crc kubenswrapper[4895]: I0129 08:58:08.657460 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" event={"ID":"cbca22f6-6189-4f59-b9bd-832466c437d1","Type":"ContainerStarted","Data":"5b8a771fdc23bbc8af26cf62c5f66576ffaa37d3a136ee8a79feac0f6f1c344d"} Jan 29 08:58:08 crc kubenswrapper[4895]: I0129 08:58:08.658045 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" Jan 29 08:58:08 crc kubenswrapper[4895]: I0129 08:58:08.660125 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgs4j" event={"ID":"94afc809-09fa-479a-b4d0-eea69e4ee4a5","Type":"ContainerStarted","Data":"5c7f3102f21a39363e98c2fd59bdaaaba38b1ccdcf0d79a18918d7952e39af7e"} Jan 29 08:58:08 crc kubenswrapper[4895]: I0129 08:58:08.709509 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" podStartSLOduration=47.26380582 podStartE2EDuration="54.709488126s" podCreationTimestamp="2026-01-29 08:57:14 +0000 UTC" firstStartedPulling="2026-01-29 08:58:00.198262522 +0000 
UTC m=+1021.839770668" lastFinishedPulling="2026-01-29 08:58:07.643944828 +0000 UTC m=+1029.285452974" observedRunningTime="2026-01-29 08:58:08.69843279 +0000 UTC m=+1030.339940946" watchObservedRunningTime="2026-01-29 08:58:08.709488126 +0000 UTC m=+1030.350996272" Jan 29 08:58:08 crc kubenswrapper[4895]: I0129 08:58:08.733726 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tgs4j" podStartSLOduration=24.570957794999998 podStartE2EDuration="30.733700393s" podCreationTimestamp="2026-01-29 08:57:38 +0000 UTC" firstStartedPulling="2026-01-29 08:58:01.481128678 +0000 UTC m=+1023.122636824" lastFinishedPulling="2026-01-29 08:58:07.643871276 +0000 UTC m=+1029.285379422" observedRunningTime="2026-01-29 08:58:08.727042674 +0000 UTC m=+1030.368550830" watchObservedRunningTime="2026-01-29 08:58:08.733700393 +0000 UTC m=+1030.375208539" Jan 29 08:58:08 crc kubenswrapper[4895]: I0129 08:58:08.807407 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" podStartSLOduration=47.649485941 podStartE2EDuration="55.807379392s" podCreationTimestamp="2026-01-29 08:57:13 +0000 UTC" firstStartedPulling="2026-01-29 08:57:59.485943864 +0000 UTC m=+1021.127452010" lastFinishedPulling="2026-01-29 08:58:07.643837325 +0000 UTC m=+1029.285345461" observedRunningTime="2026-01-29 08:58:08.771681308 +0000 UTC m=+1030.413189464" watchObservedRunningTime="2026-01-29 08:58:08.807379392 +0000 UTC m=+1030.448887538" Jan 29 08:58:08 crc kubenswrapper[4895]: I0129 08:58:08.961504 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:58:08 crc kubenswrapper[4895]: I0129 08:58:08.961565 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:58:09 crc kubenswrapper[4895]: I0129 
08:58:09.672185 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-qz9c2" event={"ID":"001b758d-81ef-40e5-b53a-7c264915580d","Type":"ContainerStarted","Data":"55bacbf5d18e6d63a153cbc891a8b2a48b7a2fd421674ce83795143f835083b8"} Jan 29 08:58:09 crc kubenswrapper[4895]: I0129 08:58:09.673115 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-qz9c2" Jan 29 08:58:09 crc kubenswrapper[4895]: I0129 08:58:09.692518 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-qz9c2" podStartSLOduration=4.8670699729999995 podStartE2EDuration="55.692492578s" podCreationTimestamp="2026-01-29 08:57:14 +0000 UTC" firstStartedPulling="2026-01-29 08:57:17.970090009 +0000 UTC m=+979.611598155" lastFinishedPulling="2026-01-29 08:58:08.795512594 +0000 UTC m=+1030.437020760" observedRunningTime="2026-01-29 08:58:09.68885011 +0000 UTC m=+1031.330358256" watchObservedRunningTime="2026-01-29 08:58:09.692492578 +0000 UTC m=+1031.334000724" Jan 29 08:58:10 crc kubenswrapper[4895]: I0129 08:58:10.016734 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-tgs4j" podUID="94afc809-09fa-479a-b4d0-eea69e4ee4a5" containerName="registry-server" probeResult="failure" output=< Jan 29 08:58:10 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Jan 29 08:58:10 crc kubenswrapper[4895]: > Jan 29 08:58:10 crc kubenswrapper[4895]: I0129 08:58:10.684042 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54c4948594-l45qb" event={"ID":"baa89b4d-cf32-498b-a624-585afea7f964","Type":"ContainerStarted","Data":"943cf57d90ee24af1abf2a02b95424ae62ce10de89c1e8123124c63dc950bd8e"} Jan 29 08:58:10 crc kubenswrapper[4895]: I0129 
08:58:10.684983 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-54c4948594-l45qb" Jan 29 08:58:10 crc kubenswrapper[4895]: I0129 08:58:10.705984 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-54c4948594-l45qb" podStartSLOduration=5.091197262 podStartE2EDuration="56.705960964s" podCreationTimestamp="2026-01-29 08:57:14 +0000 UTC" firstStartedPulling="2026-01-29 08:57:17.978046872 +0000 UTC m=+979.619555018" lastFinishedPulling="2026-01-29 08:58:09.592810574 +0000 UTC m=+1031.234318720" observedRunningTime="2026-01-29 08:58:10.698806853 +0000 UTC m=+1032.340315009" watchObservedRunningTime="2026-01-29 08:58:10.705960964 +0000 UTC m=+1032.347469110" Jan 29 08:58:11 crc kubenswrapper[4895]: E0129 08:58:11.213126 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fh6n2" podUID="1c9af700-ef2b-4d02-a76f-77d31d981a5f" Jan 29 08:58:11 crc kubenswrapper[4895]: E0129 08:58:11.213273 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz" podUID="6bf40523-2804-408c-b50d-cb04bf5b32fc" Jan 29 08:58:12 crc kubenswrapper[4895]: E0129 08:58:12.213707 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h" podUID="bf9282d5-a557-4321-b05d-35552e124429" Jan 29 08:58:12 crc kubenswrapper[4895]: I0129 08:58:12.758799 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8t4nd" event={"ID":"348e067e-1b54-43e2-9c01-bf430f7a3630","Type":"ContainerStarted","Data":"cc89d657f791bbb8284064e6f7c0d16be95c9e689aaf858f07cd2551b491ddcb"} Jan 29 08:58:12 crc kubenswrapper[4895]: I0129 08:58:12.759215 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8t4nd" Jan 29 08:58:12 crc kubenswrapper[4895]: I0129 08:58:12.779114 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8t4nd" podStartSLOduration=5.157623806 podStartE2EDuration="58.77908545s" podCreationTimestamp="2026-01-29 08:57:14 +0000 UTC" firstStartedPulling="2026-01-29 08:57:18.032058155 +0000 UTC m=+979.673566301" lastFinishedPulling="2026-01-29 08:58:11.653519799 +0000 UTC m=+1033.295027945" observedRunningTime="2026-01-29 08:58:12.774269382 +0000 UTC m=+1034.415777528" watchObservedRunningTime="2026-01-29 08:58:12.77908545 +0000 UTC m=+1034.420593596" Jan 29 08:58:14 crc kubenswrapper[4895]: I0129 08:58:14.139067 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-gd75d" Jan 29 08:58:14 crc kubenswrapper[4895]: E0129 08:58:14.213470 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh" podUID="f7276bca-f319-46bf-a1b4-92a6aec8e6e6" Jan 29 08:58:14 crc kubenswrapper[4895]: I0129 08:58:14.311025 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-58zzj" Jan 29 08:58:14 crc kubenswrapper[4895]: I0129 08:58:14.322236 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7hp5l" Jan 29 08:58:14 crc kubenswrapper[4895]: I0129 08:58:14.407040 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-sdkzk" Jan 29 08:58:14 crc kubenswrapper[4895]: I0129 08:58:14.575571 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-54c4948594-l45qb" Jan 29 08:58:15 crc kubenswrapper[4895]: I0129 08:58:15.317798 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-qz9c2" Jan 29 08:58:19 crc kubenswrapper[4895]: I0129 08:58:19.004523 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:58:19 crc kubenswrapper[4895]: I0129 08:58:19.055212 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:58:19 crc kubenswrapper[4895]: I0129 08:58:19.243094 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgs4j"] Jan 29 08:58:20 crc kubenswrapper[4895]: I0129 08:58:20.405130 4895 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tptkw" Jan 29 08:58:20 crc kubenswrapper[4895]: I0129 08:58:20.582237 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d" Jan 29 08:58:20 crc kubenswrapper[4895]: I0129 08:58:20.823541 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tgs4j" podUID="94afc809-09fa-479a-b4d0-eea69e4ee4a5" containerName="registry-server" containerID="cri-o://5c7f3102f21a39363e98c2fd59bdaaaba38b1ccdcf0d79a18918d7952e39af7e" gracePeriod=2 Jan 29 08:58:22 crc kubenswrapper[4895]: I0129 08:58:22.851716 4895 generic.go:334] "Generic (PLEG): container finished" podID="94afc809-09fa-479a-b4d0-eea69e4ee4a5" containerID="5c7f3102f21a39363e98c2fd59bdaaaba38b1ccdcf0d79a18918d7952e39af7e" exitCode=0 Jan 29 08:58:22 crc kubenswrapper[4895]: I0129 08:58:22.851763 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgs4j" event={"ID":"94afc809-09fa-479a-b4d0-eea69e4ee4a5","Type":"ContainerDied","Data":"5c7f3102f21a39363e98c2fd59bdaaaba38b1ccdcf0d79a18918d7952e39af7e"} Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.043787 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.076521 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94afc809-09fa-479a-b4d0-eea69e4ee4a5-utilities\") pod \"94afc809-09fa-479a-b4d0-eea69e4ee4a5\" (UID: \"94afc809-09fa-479a-b4d0-eea69e4ee4a5\") " Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.076732 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94afc809-09fa-479a-b4d0-eea69e4ee4a5-catalog-content\") pod \"94afc809-09fa-479a-b4d0-eea69e4ee4a5\" (UID: \"94afc809-09fa-479a-b4d0-eea69e4ee4a5\") " Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.076867 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmgs2\" (UniqueName: \"kubernetes.io/projected/94afc809-09fa-479a-b4d0-eea69e4ee4a5-kube-api-access-tmgs2\") pod \"94afc809-09fa-479a-b4d0-eea69e4ee4a5\" (UID: \"94afc809-09fa-479a-b4d0-eea69e4ee4a5\") " Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.078018 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94afc809-09fa-479a-b4d0-eea69e4ee4a5-utilities" (OuterVolumeSpecName: "utilities") pod "94afc809-09fa-479a-b4d0-eea69e4ee4a5" (UID: "94afc809-09fa-479a-b4d0-eea69e4ee4a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.088051 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94afc809-09fa-479a-b4d0-eea69e4ee4a5-kube-api-access-tmgs2" (OuterVolumeSpecName: "kube-api-access-tmgs2") pod "94afc809-09fa-479a-b4d0-eea69e4ee4a5" (UID: "94afc809-09fa-479a-b4d0-eea69e4ee4a5"). InnerVolumeSpecName "kube-api-access-tmgs2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.099701 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94afc809-09fa-479a-b4d0-eea69e4ee4a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94afc809-09fa-479a-b4d0-eea69e4ee4a5" (UID: "94afc809-09fa-479a-b4d0-eea69e4ee4a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.181738 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94afc809-09fa-479a-b4d0-eea69e4ee4a5-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.181771 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94afc809-09fa-479a-b4d0-eea69e4ee4a5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.181788 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmgs2\" (UniqueName: \"kubernetes.io/projected/94afc809-09fa-479a-b4d0-eea69e4ee4a5-kube-api-access-tmgs2\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.862629 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgs4j" event={"ID":"94afc809-09fa-479a-b4d0-eea69e4ee4a5","Type":"ContainerDied","Data":"e0d573bef73fd725d6a2aa88a6f526751531a0648014e06347a9cb8f0295fbf6"} Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.863142 4895 scope.go:117] "RemoveContainer" containerID="5c7f3102f21a39363e98c2fd59bdaaaba38b1ccdcf0d79a18918d7952e39af7e" Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.862967 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgs4j" Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.868405 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz" event={"ID":"7520cf55-cb4a-4598-80d9-499ab60f5ff1","Type":"ContainerStarted","Data":"f7f791ff8fd662bbe2a8f09f74debfdb11de1b55951eafd75a9ce5bd8280a211"} Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.869605 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz" Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.885574 4895 scope.go:117] "RemoveContainer" containerID="a8acee2ed2c8aa5f8c49aab39ef70908e386b151833c36659ad359287c955362" Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.900346 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz" podStartSLOduration=4.913312748 podStartE2EDuration="1m9.900322159s" podCreationTimestamp="2026-01-29 08:57:14 +0000 UTC" firstStartedPulling="2026-01-29 08:57:18.350204088 +0000 UTC m=+979.991712234" lastFinishedPulling="2026-01-29 08:58:23.337213499 +0000 UTC m=+1044.978721645" observedRunningTime="2026-01-29 08:58:23.887552777 +0000 UTC m=+1045.529060933" watchObservedRunningTime="2026-01-29 08:58:23.900322159 +0000 UTC m=+1045.541830295" Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.909808 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgs4j"] Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.918463 4895 scope.go:117] "RemoveContainer" containerID="12fb14a86d0d2238f3dae3d8c0fc91f0bb160091d70d36e80d1e4fe9d31ed390" Jan 29 08:58:23 crc kubenswrapper[4895]: I0129 08:58:23.925018 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgs4j"] Jan 29 
08:58:24 crc kubenswrapper[4895]: I0129 08:58:24.878959 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fh6n2" event={"ID":"1c9af700-ef2b-4d02-a76f-77d31d981a5f","Type":"ContainerStarted","Data":"ba83175db6902dec6cb810aa83bfcb8b10493b2ddbcbf3a927de14cb5e1fc144"} Jan 29 08:58:24 crc kubenswrapper[4895]: I0129 08:58:24.900185 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fh6n2" podStartSLOduration=4.533349678 podStartE2EDuration="1m9.90016422s" podCreationTimestamp="2026-01-29 08:57:15 +0000 UTC" firstStartedPulling="2026-01-29 08:57:18.314406541 +0000 UTC m=+979.955914687" lastFinishedPulling="2026-01-29 08:58:23.681221083 +0000 UTC m=+1045.322729229" observedRunningTime="2026-01-29 08:58:24.897105109 +0000 UTC m=+1046.538613255" watchObservedRunningTime="2026-01-29 08:58:24.90016422 +0000 UTC m=+1046.541672366" Jan 29 08:58:25 crc kubenswrapper[4895]: I0129 08:58:25.223486 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94afc809-09fa-479a-b4d0-eea69e4ee4a5" path="/var/lib/kubelet/pods/94afc809-09fa-479a-b4d0-eea69e4ee4a5/volumes" Jan 29 08:58:25 crc kubenswrapper[4895]: I0129 08:58:25.318240 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-8t4nd" Jan 29 08:58:27 crc kubenswrapper[4895]: I0129 08:58:27.911154 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh" event={"ID":"f7276bca-f319-46bf-a1b4-92a6aec8e6e6","Type":"ContainerStarted","Data":"952f4e64ef8935e0bdc0e24d588a617a713ae2632ff4f677e1d5db8810f46d7f"} Jan 29 08:58:27 crc kubenswrapper[4895]: I0129 08:58:27.912274 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh" Jan 29 08:58:27 crc kubenswrapper[4895]: I0129 08:58:27.914721 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h" event={"ID":"bf9282d5-a557-4321-b05d-35552e124429","Type":"ContainerStarted","Data":"fd635b85317f1255ea9cd0462a0ef0d9c70ad6952c100d6414f14f759a0987cf"} Jan 29 08:58:27 crc kubenswrapper[4895]: I0129 08:58:27.914986 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h" Jan 29 08:58:27 crc kubenswrapper[4895]: I0129 08:58:27.916645 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz" event={"ID":"6bf40523-2804-408c-b50d-cb04bf5b32fc","Type":"ContainerStarted","Data":"98188122fd27668317ba226b0845aa959fba96888eb124b49c1ec6ae33d12cd9"} Jan 29 08:58:27 crc kubenswrapper[4895]: I0129 08:58:27.916906 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz" Jan 29 08:58:27 crc kubenswrapper[4895]: I0129 08:58:27.934461 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh" podStartSLOduration=4.654467771 podStartE2EDuration="1m13.934440816s" podCreationTimestamp="2026-01-29 08:57:14 +0000 UTC" firstStartedPulling="2026-01-29 08:57:18.340680934 +0000 UTC m=+979.982189080" lastFinishedPulling="2026-01-29 08:58:27.620653979 +0000 UTC m=+1049.262162125" observedRunningTime="2026-01-29 08:58:27.931380833 +0000 UTC m=+1049.572888979" watchObservedRunningTime="2026-01-29 08:58:27.934440816 +0000 UTC m=+1049.575948962" Jan 29 08:58:27 crc kubenswrapper[4895]: I0129 08:58:27.951767 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz" podStartSLOduration=5.600797332 podStartE2EDuration="1m13.951732038s" podCreationTimestamp="2026-01-29 08:57:14 +0000 UTC" firstStartedPulling="2026-01-29 08:57:18.340350345 +0000 UTC m=+979.981858481" lastFinishedPulling="2026-01-29 08:58:26.691285041 +0000 UTC m=+1048.332793187" observedRunningTime="2026-01-29 08:58:27.949018925 +0000 UTC m=+1049.590527071" watchObservedRunningTime="2026-01-29 08:58:27.951732038 +0000 UTC m=+1049.593240184" Jan 29 08:58:27 crc kubenswrapper[4895]: I0129 08:58:27.980896 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h" podStartSLOduration=5.60814693 podStartE2EDuration="1m13.980872557s" podCreationTimestamp="2026-01-29 08:57:14 +0000 UTC" firstStartedPulling="2026-01-29 08:57:18.319537049 +0000 UTC m=+979.961045195" lastFinishedPulling="2026-01-29 08:58:26.692262676 +0000 UTC m=+1048.333770822" observedRunningTime="2026-01-29 08:58:27.974702211 +0000 UTC m=+1049.616210367" watchObservedRunningTime="2026-01-29 08:58:27.980872557 +0000 UTC m=+1049.622380703" Jan 29 08:58:35 crc kubenswrapper[4895]: I0129 08:58:35.017907 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mj7xz" Jan 29 08:58:35 crc kubenswrapper[4895]: I0129 08:58:35.317649 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-mnp2h" Jan 29 08:58:35 crc kubenswrapper[4895]: I0129 08:58:35.320691 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zpdkh" Jan 29 08:58:35 crc kubenswrapper[4895]: I0129 08:58:35.365404 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4zrlz" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.805816 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ztblm"] Jan 29 08:58:50 crc kubenswrapper[4895]: E0129 08:58:50.806715 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94afc809-09fa-479a-b4d0-eea69e4ee4a5" containerName="extract-utilities" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.806730 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="94afc809-09fa-479a-b4d0-eea69e4ee4a5" containerName="extract-utilities" Jan 29 08:58:50 crc kubenswrapper[4895]: E0129 08:58:50.806738 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94afc809-09fa-479a-b4d0-eea69e4ee4a5" containerName="registry-server" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.806744 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="94afc809-09fa-479a-b4d0-eea69e4ee4a5" containerName="registry-server" Jan 29 08:58:50 crc kubenswrapper[4895]: E0129 08:58:50.806775 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94afc809-09fa-479a-b4d0-eea69e4ee4a5" containerName="extract-content" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.806781 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="94afc809-09fa-479a-b4d0-eea69e4ee4a5" containerName="extract-content" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.806975 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="94afc809-09fa-479a-b4d0-eea69e4ee4a5" containerName="registry-server" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.807736 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-ztblm" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.814280 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.814335 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.814622 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7wtk6" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.814619 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.831733 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ztblm"] Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.856606 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdxvm\" (UniqueName: \"kubernetes.io/projected/6fd978bd-9d18-4f1c-a8c5-e071e14ea337-kube-api-access-hdxvm\") pod \"dnsmasq-dns-675f4bcbfc-ztblm\" (UID: \"6fd978bd-9d18-4f1c-a8c5-e071e14ea337\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ztblm" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.856723 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd978bd-9d18-4f1c-a8c5-e071e14ea337-config\") pod \"dnsmasq-dns-675f4bcbfc-ztblm\" (UID: \"6fd978bd-9d18-4f1c-a8c5-e071e14ea337\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ztblm" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.885358 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ncvl9"] Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.890315 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ncvl9" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.900856 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ncvl9"] Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.901129 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.960072 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdxvm\" (UniqueName: \"kubernetes.io/projected/6fd978bd-9d18-4f1c-a8c5-e071e14ea337-kube-api-access-hdxvm\") pod \"dnsmasq-dns-675f4bcbfc-ztblm\" (UID: \"6fd978bd-9d18-4f1c-a8c5-e071e14ea337\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ztblm" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.960159 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd978bd-9d18-4f1c-a8c5-e071e14ea337-config\") pod \"dnsmasq-dns-675f4bcbfc-ztblm\" (UID: \"6fd978bd-9d18-4f1c-a8c5-e071e14ea337\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ztblm" Jan 29 08:58:50 crc kubenswrapper[4895]: I0129 08:58:50.961124 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd978bd-9d18-4f1c-a8c5-e071e14ea337-config\") pod \"dnsmasq-dns-675f4bcbfc-ztblm\" (UID: \"6fd978bd-9d18-4f1c-a8c5-e071e14ea337\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ztblm" Jan 29 08:58:51 crc kubenswrapper[4895]: I0129 08:58:51.001484 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdxvm\" (UniqueName: \"kubernetes.io/projected/6fd978bd-9d18-4f1c-a8c5-e071e14ea337-kube-api-access-hdxvm\") pod \"dnsmasq-dns-675f4bcbfc-ztblm\" (UID: \"6fd978bd-9d18-4f1c-a8c5-e071e14ea337\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ztblm" Jan 29 08:58:51 crc kubenswrapper[4895]: I0129 08:58:51.061877 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnxnf\" (UniqueName: \"kubernetes.io/projected/64de6c11-4446-427a-be04-3e23713ab128-kube-api-access-mnxnf\") pod \"dnsmasq-dns-78dd6ddcc-ncvl9\" (UID: \"64de6c11-4446-427a-be04-3e23713ab128\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ncvl9" Jan 29 08:58:51 crc kubenswrapper[4895]: I0129 08:58:51.061961 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64de6c11-4446-427a-be04-3e23713ab128-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ncvl9\" (UID: \"64de6c11-4446-427a-be04-3e23713ab128\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ncvl9" Jan 29 08:58:51 crc kubenswrapper[4895]: I0129 08:58:51.062254 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64de6c11-4446-427a-be04-3e23713ab128-config\") pod \"dnsmasq-dns-78dd6ddcc-ncvl9\" (UID: \"64de6c11-4446-427a-be04-3e23713ab128\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ncvl9" Jan 29 08:58:51 crc kubenswrapper[4895]: I0129 08:58:51.129820 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-ztblm" Jan 29 08:58:51 crc kubenswrapper[4895]: I0129 08:58:51.163877 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnxnf\" (UniqueName: \"kubernetes.io/projected/64de6c11-4446-427a-be04-3e23713ab128-kube-api-access-mnxnf\") pod \"dnsmasq-dns-78dd6ddcc-ncvl9\" (UID: \"64de6c11-4446-427a-be04-3e23713ab128\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ncvl9" Jan 29 08:58:51 crc kubenswrapper[4895]: I0129 08:58:51.164069 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64de6c11-4446-427a-be04-3e23713ab128-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ncvl9\" (UID: \"64de6c11-4446-427a-be04-3e23713ab128\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ncvl9" Jan 29 08:58:51 crc kubenswrapper[4895]: I0129 08:58:51.164139 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64de6c11-4446-427a-be04-3e23713ab128-config\") pod \"dnsmasq-dns-78dd6ddcc-ncvl9\" (UID: \"64de6c11-4446-427a-be04-3e23713ab128\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ncvl9" Jan 29 08:58:51 crc kubenswrapper[4895]: I0129 08:58:51.165399 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64de6c11-4446-427a-be04-3e23713ab128-config\") pod \"dnsmasq-dns-78dd6ddcc-ncvl9\" (UID: \"64de6c11-4446-427a-be04-3e23713ab128\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ncvl9" Jan 29 08:58:51 crc kubenswrapper[4895]: I0129 08:58:51.168535 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64de6c11-4446-427a-be04-3e23713ab128-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ncvl9\" (UID: \"64de6c11-4446-427a-be04-3e23713ab128\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ncvl9" Jan 29 08:58:51 crc kubenswrapper[4895]: I0129 
08:58:51.184897 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnxnf\" (UniqueName: \"kubernetes.io/projected/64de6c11-4446-427a-be04-3e23713ab128-kube-api-access-mnxnf\") pod \"dnsmasq-dns-78dd6ddcc-ncvl9\" (UID: \"64de6c11-4446-427a-be04-3e23713ab128\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ncvl9" Jan 29 08:58:51 crc kubenswrapper[4895]: I0129 08:58:51.216115 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ncvl9" Jan 29 08:58:51 crc kubenswrapper[4895]: I0129 08:58:51.524633 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ztblm"] Jan 29 08:58:51 crc kubenswrapper[4895]: I0129 08:58:51.811686 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ncvl9"] Jan 29 08:58:51 crc kubenswrapper[4895]: W0129 08:58:51.816194 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64de6c11_4446_427a_be04_3e23713ab128.slice/crio-972ab06988612ed3b1c511c2d88b4458cd8ef1533b00cb6118bc493cbc4bf418 WatchSource:0}: Error finding container 972ab06988612ed3b1c511c2d88b4458cd8ef1533b00cb6118bc493cbc4bf418: Status 404 returned error can't find the container with id 972ab06988612ed3b1c511c2d88b4458cd8ef1533b00cb6118bc493cbc4bf418 Jan 29 08:58:52 crc kubenswrapper[4895]: I0129 08:58:52.143563 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ncvl9" event={"ID":"64de6c11-4446-427a-be04-3e23713ab128","Type":"ContainerStarted","Data":"972ab06988612ed3b1c511c2d88b4458cd8ef1533b00cb6118bc493cbc4bf418"} Jan 29 08:58:52 crc kubenswrapper[4895]: I0129 08:58:52.145060 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-ztblm" 
event={"ID":"6fd978bd-9d18-4f1c-a8c5-e071e14ea337","Type":"ContainerStarted","Data":"6041f7600ebc6ab9f919f91014e1b005eaf978a8edb3e8167dd4b72a35362b43"} Jan 29 08:58:53 crc kubenswrapper[4895]: I0129 08:58:53.896693 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ztblm"] Jan 29 08:58:53 crc kubenswrapper[4895]: I0129 08:58:53.947257 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8msnl"] Jan 29 08:58:53 crc kubenswrapper[4895]: I0129 08:58:53.959264 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8msnl" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.048961 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8msnl"] Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.107101 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6868b7-0c63-4cbf-830d-a167983e116d-config\") pod \"dnsmasq-dns-666b6646f7-8msnl\" (UID: \"bd6868b7-0c63-4cbf-830d-a167983e116d\") " pod="openstack/dnsmasq-dns-666b6646f7-8msnl" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.107185 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd6868b7-0c63-4cbf-830d-a167983e116d-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8msnl\" (UID: \"bd6868b7-0c63-4cbf-830d-a167983e116d\") " pod="openstack/dnsmasq-dns-666b6646f7-8msnl" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.107237 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsd4s\" (UniqueName: \"kubernetes.io/projected/bd6868b7-0c63-4cbf-830d-a167983e116d-kube-api-access-hsd4s\") pod \"dnsmasq-dns-666b6646f7-8msnl\" (UID: \"bd6868b7-0c63-4cbf-830d-a167983e116d\") " 
pod="openstack/dnsmasq-dns-666b6646f7-8msnl" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.208389 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6868b7-0c63-4cbf-830d-a167983e116d-config\") pod \"dnsmasq-dns-666b6646f7-8msnl\" (UID: \"bd6868b7-0c63-4cbf-830d-a167983e116d\") " pod="openstack/dnsmasq-dns-666b6646f7-8msnl" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.208491 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd6868b7-0c63-4cbf-830d-a167983e116d-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8msnl\" (UID: \"bd6868b7-0c63-4cbf-830d-a167983e116d\") " pod="openstack/dnsmasq-dns-666b6646f7-8msnl" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.208544 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsd4s\" (UniqueName: \"kubernetes.io/projected/bd6868b7-0c63-4cbf-830d-a167983e116d-kube-api-access-hsd4s\") pod \"dnsmasq-dns-666b6646f7-8msnl\" (UID: \"bd6868b7-0c63-4cbf-830d-a167983e116d\") " pod="openstack/dnsmasq-dns-666b6646f7-8msnl" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.210069 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd6868b7-0c63-4cbf-830d-a167983e116d-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8msnl\" (UID: \"bd6868b7-0c63-4cbf-830d-a167983e116d\") " pod="openstack/dnsmasq-dns-666b6646f7-8msnl" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.210126 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6868b7-0c63-4cbf-830d-a167983e116d-config\") pod \"dnsmasq-dns-666b6646f7-8msnl\" (UID: \"bd6868b7-0c63-4cbf-830d-a167983e116d\") " pod="openstack/dnsmasq-dns-666b6646f7-8msnl" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.228101 4895 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ncvl9"] Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.263271 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsd4s\" (UniqueName: \"kubernetes.io/projected/bd6868b7-0c63-4cbf-830d-a167983e116d-kube-api-access-hsd4s\") pod \"dnsmasq-dns-666b6646f7-8msnl\" (UID: \"bd6868b7-0c63-4cbf-830d-a167983e116d\") " pod="openstack/dnsmasq-dns-666b6646f7-8msnl" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.264928 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-957ss"] Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.266331 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-957ss" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.293976 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-957ss"] Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.302966 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8msnl" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.413058 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21a3d792-6aca-4649-8321-8ee399ce37d6-config\") pod \"dnsmasq-dns-57d769cc4f-957ss\" (UID: \"21a3d792-6aca-4649-8321-8ee399ce37d6\") " pod="openstack/dnsmasq-dns-57d769cc4f-957ss" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.413272 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4mnv\" (UniqueName: \"kubernetes.io/projected/21a3d792-6aca-4649-8321-8ee399ce37d6-kube-api-access-m4mnv\") pod \"dnsmasq-dns-57d769cc4f-957ss\" (UID: \"21a3d792-6aca-4649-8321-8ee399ce37d6\") " pod="openstack/dnsmasq-dns-57d769cc4f-957ss" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.413304 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21a3d792-6aca-4649-8321-8ee399ce37d6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-957ss\" (UID: \"21a3d792-6aca-4649-8321-8ee399ce37d6\") " pod="openstack/dnsmasq-dns-57d769cc4f-957ss" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.516844 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4mnv\" (UniqueName: \"kubernetes.io/projected/21a3d792-6aca-4649-8321-8ee399ce37d6-kube-api-access-m4mnv\") pod \"dnsmasq-dns-57d769cc4f-957ss\" (UID: \"21a3d792-6aca-4649-8321-8ee399ce37d6\") " pod="openstack/dnsmasq-dns-57d769cc4f-957ss" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.516941 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21a3d792-6aca-4649-8321-8ee399ce37d6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-957ss\" (UID: 
\"21a3d792-6aca-4649-8321-8ee399ce37d6\") " pod="openstack/dnsmasq-dns-57d769cc4f-957ss" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.517035 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21a3d792-6aca-4649-8321-8ee399ce37d6-config\") pod \"dnsmasq-dns-57d769cc4f-957ss\" (UID: \"21a3d792-6aca-4649-8321-8ee399ce37d6\") " pod="openstack/dnsmasq-dns-57d769cc4f-957ss" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.518723 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21a3d792-6aca-4649-8321-8ee399ce37d6-config\") pod \"dnsmasq-dns-57d769cc4f-957ss\" (UID: \"21a3d792-6aca-4649-8321-8ee399ce37d6\") " pod="openstack/dnsmasq-dns-57d769cc4f-957ss" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.519181 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21a3d792-6aca-4649-8321-8ee399ce37d6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-957ss\" (UID: \"21a3d792-6aca-4649-8321-8ee399ce37d6\") " pod="openstack/dnsmasq-dns-57d769cc4f-957ss" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.662645 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4mnv\" (UniqueName: \"kubernetes.io/projected/21a3d792-6aca-4649-8321-8ee399ce37d6-kube-api-access-m4mnv\") pod \"dnsmasq-dns-57d769cc4f-957ss\" (UID: \"21a3d792-6aca-4649-8321-8ee399ce37d6\") " pod="openstack/dnsmasq-dns-57d769cc4f-957ss" Jan 29 08:58:54 crc kubenswrapper[4895]: I0129 08:58:54.931753 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-957ss" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.076590 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8msnl"] Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.088265 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.105950 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.109511 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.109770 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.113732 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.113747 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-grh7r" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.113812 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.114152 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.114208 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.114969 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.242815 4895 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-666b6646f7-8msnl" event={"ID":"bd6868b7-0c63-4cbf-830d-a167983e116d","Type":"ContainerStarted","Data":"7176788c68fb148c4380d2f0b8074c28031e4ab03bd91556087183e0db4d1f4c"} Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.441037 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.441104 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.441154 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.441189 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.441230 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntw6t\" (UniqueName: \"kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-kube-api-access-ntw6t\") 
pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.441274 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.441308 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.441327 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.441365 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.441437 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.441477 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-config-data\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.544840 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.544883 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-config-data\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.544905 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.544945 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.544967 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.544991 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.545024 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.545038 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntw6t\" (UniqueName: \"kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-kube-api-access-ntw6t\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.545064 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.545080 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.545109 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.546476 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.547628 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.548417 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.549554 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " 
pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.550309 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-config-data\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.550626 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.586124 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.587169 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.599814 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.609615 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.610245 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntw6t\" (UniqueName: \"kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-kube-api-access-ntw6t\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.625993 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.739505 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 08:58:55 crc kubenswrapper[4895]: I0129 08:58:55.789554 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-957ss"] Jan 29 08:58:55 crc kubenswrapper[4895]: W0129 08:58:55.921473 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21a3d792_6aca_4649_8321_8ee399ce37d6.slice/crio-b77a137c63e1083ce7b7d9f49157a8a14cea2c5e9bcc5b3faef864432fa8e906 WatchSource:0}: Error finding container b77a137c63e1083ce7b7d9f49157a8a14cea2c5e9bcc5b3faef864432fa8e906: Status 404 returned error can't find the container with id b77a137c63e1083ce7b7d9f49157a8a14cea2c5e9bcc5b3faef864432fa8e906 Jan 29 08:58:56 crc kubenswrapper[4895]: I0129 08:58:56.262408 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-957ss" event={"ID":"21a3d792-6aca-4649-8321-8ee399ce37d6","Type":"ContainerStarted","Data":"b77a137c63e1083ce7b7d9f49157a8a14cea2c5e9bcc5b3faef864432fa8e906"} Jan 29 08:58:56 crc kubenswrapper[4895]: I0129 08:58:56.559140 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 08:58:56 crc kubenswrapper[4895]: W0129 08:58:56.605988 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d3ea6f8_e1cd_41fe_8169_00fc80c995b5.slice/crio-cec0643400228dbfcee52a5e2739034397e89c08765866ff8cd986ac1b1ce4db WatchSource:0}: Error finding container cec0643400228dbfcee52a5e2739034397e89c08765866ff8cd986ac1b1ce4db: Status 404 returned error can't find the container with id cec0643400228dbfcee52a5e2739034397e89c08765866ff8cd986ac1b1ce4db Jan 29 08:58:56 crc kubenswrapper[4895]: I0129 08:58:56.869081 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 29 08:58:56 crc kubenswrapper[4895]: I0129 08:58:56.875641 4895 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 08:58:56 crc kubenswrapper[4895]: I0129 08:58:56.884060 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 29 08:58:56 crc kubenswrapper[4895]: I0129 08:58:56.884818 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-skqnk" Jan 29 08:58:56 crc kubenswrapper[4895]: I0129 08:58:56.887664 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 29 08:58:56 crc kubenswrapper[4895]: I0129 08:58:56.888297 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 29 08:58:56 crc kubenswrapper[4895]: I0129 08:58:56.923583 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 08:58:56 crc kubenswrapper[4895]: I0129 08:58:56.931790 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.022158 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-config-data-generated\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.022642 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-config-data-default\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.022743 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbllc\" (UniqueName: \"kubernetes.io/projected/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-kube-api-access-dbllc\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.022874 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-kolla-config\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.023019 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.023133 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.023261 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-operator-scripts\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.023569 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.125605 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-config-data-generated\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.125676 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-config-data-default\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.125695 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbllc\" (UniqueName: \"kubernetes.io/projected/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-kube-api-access-dbllc\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.125728 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-kolla-config\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.125754 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-galera-tls-certs\") 
pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.125785 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.125818 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-operator-scripts\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.125899 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.139598 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-kolla-config\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.140671 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-config-data-generated\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.141429 
4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-config-data-default\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.142044 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.155722 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.164497 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-operator-scripts\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.165386 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbllc\" (UniqueName: \"kubernetes.io/projected/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-kube-api-access-dbllc\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.167245 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.168126 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94\") " pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.228899 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.275738 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5","Type":"ContainerStarted","Data":"cec0643400228dbfcee52a5e2739034397e89c08765866ff8cd986ac1b1ce4db"} Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.689384 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.691092 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.697534 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dxlcp" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.701938 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.702188 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.702874 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.703017 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.703195 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.703310 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.705650 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.840825 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.841406 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 
08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.841472 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.841492 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2wvv\" (UniqueName: \"kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-kube-api-access-q2wvv\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.841513 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.841528 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cbcad4af-7c93-4d6e-b825-42a586db5d81-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.841571 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.841599 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.841629 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cbcad4af-7c93-4d6e-b825-42a586db5d81-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.841660 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.841677 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.841692 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 
08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.845022 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.850458 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.850563 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.850808 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.850826 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-wfxwz" Jan 29 08:58:57 crc kubenswrapper[4895]: I0129 08:58:57.852988 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009443 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbqt6\" (UniqueName: \"kubernetes.io/projected/205e527c-d0a7-4b85-9542-19a871c61693-kube-api-access-pbqt6\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009512 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009548 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205e527c-d0a7-4b85-9542-19a871c61693-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009568 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/205e527c-d0a7-4b85-9542-19a871c61693-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009591 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/205e527c-d0a7-4b85-9542-19a871c61693-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009614 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009649 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cbcad4af-7c93-4d6e-b825-42a586db5d81-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009689 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009713 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/205e527c-d0a7-4b85-9542-19a871c61693-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009734 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009754 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009770 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205e527c-d0a7-4b85-9542-19a871c61693-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009821 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-config-data\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009843 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009860 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2wvv\" (UniqueName: \"kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-kube-api-access-q2wvv\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009878 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cbcad4af-7c93-4d6e-b825-42a586db5d81-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009896 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009947 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/205e527c-d0a7-4b85-9542-19a871c61693-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.009991 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.012963 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.014348 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.015213 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.017594 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc 
kubenswrapper[4895]: I0129 08:58:58.019592 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.020158 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.021277 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.024876 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cbcad4af-7c93-4d6e-b825-42a586db5d81-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.042088 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cbcad4af-7c93-4d6e-b825-42a586db5d81-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.042410 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.049899 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2wvv\" (UniqueName: \"kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-kube-api-access-q2wvv\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.066685 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.096364 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.110336 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.112273 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/205e527c-d0a7-4b85-9542-19a871c61693-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.112347 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbqt6\" (UniqueName: \"kubernetes.io/projected/205e527c-d0a7-4b85-9542-19a871c61693-kube-api-access-pbqt6\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.112383 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205e527c-d0a7-4b85-9542-19a871c61693-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.113906 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.114418 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.114581 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-r9gg6" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.114575 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/205e527c-d0a7-4b85-9542-19a871c61693-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.114771 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/205e527c-d0a7-4b85-9542-19a871c61693-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.114798 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.114886 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/205e527c-d0a7-4b85-9542-19a871c61693-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.114948 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205e527c-d0a7-4b85-9542-19a871c61693-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.116654 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205e527c-d0a7-4b85-9542-19a871c61693-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " 
pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.117368 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/205e527c-d0a7-4b85-9542-19a871c61693-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.117467 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.119364 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.120714 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205e527c-d0a7-4b85-9542-19a871c61693-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.121357 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/205e527c-d0a7-4b85-9542-19a871c61693-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.121875 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/205e527c-d0a7-4b85-9542-19a871c61693-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.131020 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/205e527c-d0a7-4b85-9542-19a871c61693-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.170939 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.187531 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbqt6\" (UniqueName: \"kubernetes.io/projected/205e527c-d0a7-4b85-9542-19a871c61693-kube-api-access-pbqt6\") pod \"openstack-cell1-galera-0\" (UID: \"205e527c-d0a7-4b85-9542-19a871c61693\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.216815 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d720a04a-6de4-4dd9-b918-471d3d69de73-kolla-config\") pod \"memcached-0\" (UID: \"d720a04a-6de4-4dd9-b918-471d3d69de73\") " pod="openstack/memcached-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.216895 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d720a04a-6de4-4dd9-b918-471d3d69de73-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d720a04a-6de4-4dd9-b918-471d3d69de73\") " pod="openstack/memcached-0" Jan 29 08:58:58 
crc kubenswrapper[4895]: I0129 08:58:58.216933 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d720a04a-6de4-4dd9-b918-471d3d69de73-config-data\") pod \"memcached-0\" (UID: \"d720a04a-6de4-4dd9-b918-471d3d69de73\") " pod="openstack/memcached-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.216967 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d720a04a-6de4-4dd9-b918-471d3d69de73-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d720a04a-6de4-4dd9-b918-471d3d69de73\") " pod="openstack/memcached-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.217002 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x77hn\" (UniqueName: \"kubernetes.io/projected/d720a04a-6de4-4dd9-b918-471d3d69de73-kube-api-access-x77hn\") pod \"memcached-0\" (UID: \"d720a04a-6de4-4dd9-b918-471d3d69de73\") " pod="openstack/memcached-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.319022 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d720a04a-6de4-4dd9-b918-471d3d69de73-kolla-config\") pod \"memcached-0\" (UID: \"d720a04a-6de4-4dd9-b918-471d3d69de73\") " pod="openstack/memcached-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.319094 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d720a04a-6de4-4dd9-b918-471d3d69de73-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d720a04a-6de4-4dd9-b918-471d3d69de73\") " pod="openstack/memcached-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.319119 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/d720a04a-6de4-4dd9-b918-471d3d69de73-config-data\") pod \"memcached-0\" (UID: \"d720a04a-6de4-4dd9-b918-471d3d69de73\") " pod="openstack/memcached-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.319158 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d720a04a-6de4-4dd9-b918-471d3d69de73-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d720a04a-6de4-4dd9-b918-471d3d69de73\") " pod="openstack/memcached-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.319194 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x77hn\" (UniqueName: \"kubernetes.io/projected/d720a04a-6de4-4dd9-b918-471d3d69de73-kube-api-access-x77hn\") pod \"memcached-0\" (UID: \"d720a04a-6de4-4dd9-b918-471d3d69de73\") " pod="openstack/memcached-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.320465 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d720a04a-6de4-4dd9-b918-471d3d69de73-kolla-config\") pod \"memcached-0\" (UID: \"d720a04a-6de4-4dd9-b918-471d3d69de73\") " pod="openstack/memcached-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.322868 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d720a04a-6de4-4dd9-b918-471d3d69de73-config-data\") pod \"memcached-0\" (UID: \"d720a04a-6de4-4dd9-b918-471d3d69de73\") " pod="openstack/memcached-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.449807 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.451966 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d720a04a-6de4-4dd9-b918-471d3d69de73-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d720a04a-6de4-4dd9-b918-471d3d69de73\") " pod="openstack/memcached-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.459226 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x77hn\" (UniqueName: \"kubernetes.io/projected/d720a04a-6de4-4dd9-b918-471d3d69de73-kube-api-access-x77hn\") pod \"memcached-0\" (UID: \"d720a04a-6de4-4dd9-b918-471d3d69de73\") " pod="openstack/memcached-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.465803 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d720a04a-6de4-4dd9-b918-471d3d69de73-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d720a04a-6de4-4dd9-b918-471d3d69de73\") " pod="openstack/memcached-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.466234 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 08:58:58 crc kubenswrapper[4895]: I0129 08:58:58.754163 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 29 08:58:59 crc kubenswrapper[4895]: W0129 08:58:59.429754 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod996b2ef7_6f00_4cbf_b8b7_4d9bb3360c94.slice/crio-4a02c8044031e201140151ef6e773db81a4f444c24aea6793ebd45b7dcb09598 WatchSource:0}: Error finding container 4a02c8044031e201140151ef6e773db81a4f444c24aea6793ebd45b7dcb09598: Status 404 returned error can't find the container with id 4a02c8044031e201140151ef6e773db81a4f444c24aea6793ebd45b7dcb09598 Jan 29 08:58:59 crc kubenswrapper[4895]: I0129 08:58:59.472055 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 08:58:59 crc kubenswrapper[4895]: I0129 08:58:59.558133 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94","Type":"ContainerStarted","Data":"4a02c8044031e201140151ef6e773db81a4f444c24aea6793ebd45b7dcb09598"} Jan 29 08:58:59 crc kubenswrapper[4895]: I0129 08:58:59.671890 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 08:58:59 crc kubenswrapper[4895]: I0129 08:58:59.874038 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 08:58:59 crc kubenswrapper[4895]: I0129 08:58:59.886627 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 08:58:59 crc kubenswrapper[4895]: W0129 08:58:59.897526 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod205e527c_d0a7_4b85_9542_19a871c61693.slice/crio-6035f7d55f70460e9c7669897e22bb44f70a26482e398dbd3461c127f481d051 WatchSource:0}: Error finding container 6035f7d55f70460e9c7669897e22bb44f70a26482e398dbd3461c127f481d051: Status 404 returned error can't find the container with id 
6035f7d55f70460e9c7669897e22bb44f70a26482e398dbd3461c127f481d051 Jan 29 08:58:59 crc kubenswrapper[4895]: W0129 08:58:59.930142 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd720a04a_6de4_4dd9_b918_471d3d69de73.slice/crio-0ff7ed1938b7b8c6e0c2826ce0205ac46440475ffe442becee15aa957405b3f4 WatchSource:0}: Error finding container 0ff7ed1938b7b8c6e0c2826ce0205ac46440475ffe442becee15aa957405b3f4: Status 404 returned error can't find the container with id 0ff7ed1938b7b8c6e0c2826ce0205ac46440475ffe442becee15aa957405b3f4 Jan 29 08:59:00 crc kubenswrapper[4895]: I0129 08:59:00.437563 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:59:00 crc kubenswrapper[4895]: I0129 08:59:00.447231 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 08:59:00 crc kubenswrapper[4895]: I0129 08:59:00.459612 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-qqlcq" Jan 29 08:59:00 crc kubenswrapper[4895]: I0129 08:59:00.480536 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:59:00 crc kubenswrapper[4895]: I0129 08:59:00.597422 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkg97\" (UniqueName: \"kubernetes.io/projected/598e3a01-9620-4320-b00b-ac10baddb593-kube-api-access-vkg97\") pod \"kube-state-metrics-0\" (UID: \"598e3a01-9620-4320-b00b-ac10baddb593\") " pod="openstack/kube-state-metrics-0" Jan 29 08:59:00 crc kubenswrapper[4895]: I0129 08:59:00.618125 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cbcad4af-7c93-4d6e-b825-42a586db5d81","Type":"ContainerStarted","Data":"3fb13d0621aed7e5fc30264d2ec0a15b09d9ea7753e5aea5b2ee3a86d4d6ea94"} Jan 29 
08:59:00 crc kubenswrapper[4895]: I0129 08:59:00.619130 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d720a04a-6de4-4dd9-b918-471d3d69de73","Type":"ContainerStarted","Data":"0ff7ed1938b7b8c6e0c2826ce0205ac46440475ffe442becee15aa957405b3f4"} Jan 29 08:59:00 crc kubenswrapper[4895]: I0129 08:59:00.620617 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"205e527c-d0a7-4b85-9542-19a871c61693","Type":"ContainerStarted","Data":"6035f7d55f70460e9c7669897e22bb44f70a26482e398dbd3461c127f481d051"} Jan 29 08:59:00 crc kubenswrapper[4895]: I0129 08:59:00.698581 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkg97\" (UniqueName: \"kubernetes.io/projected/598e3a01-9620-4320-b00b-ac10baddb593-kube-api-access-vkg97\") pod \"kube-state-metrics-0\" (UID: \"598e3a01-9620-4320-b00b-ac10baddb593\") " pod="openstack/kube-state-metrics-0" Jan 29 08:59:00 crc kubenswrapper[4895]: I0129 08:59:00.818451 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkg97\" (UniqueName: \"kubernetes.io/projected/598e3a01-9620-4320-b00b-ac10baddb593-kube-api-access-vkg97\") pod \"kube-state-metrics-0\" (UID: \"598e3a01-9620-4320-b00b-ac10baddb593\") " pod="openstack/kube-state-metrics-0" Jan 29 08:59:01 crc kubenswrapper[4895]: I0129 08:59:01.098427 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 08:59:02 crc kubenswrapper[4895]: I0129 08:59:02.672702 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:59:03 crc kubenswrapper[4895]: I0129 08:59:03.750138 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"598e3a01-9620-4320-b00b-ac10baddb593","Type":"ContainerStarted","Data":"0e0c6e3fa6ed37f6c114da48aed29149b7bee4039d3d0f4081b4587c3ca08973"} Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.162971 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mjz6w"] Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.164142 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.169649 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.169941 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-z6zpz" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.170102 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.371328 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mjz6w"] Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.406582 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rzm2l"] Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.409214 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.439272 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rzm2l"] Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.461003 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f71eedb-46ac-474f-9d1e-d4909a49e05b-combined-ca-bundle\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.461119 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5f71eedb-46ac-474f-9d1e-d4909a49e05b-var-run-ovn\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.461148 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f71eedb-46ac-474f-9d1e-d4909a49e05b-var-run\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.461195 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f71eedb-46ac-474f-9d1e-d4909a49e05b-scripts\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.461277 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvxn\" (UniqueName: 
\"kubernetes.io/projected/5f71eedb-46ac-474f-9d1e-d4909a49e05b-kube-api-access-9xvxn\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.461309 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f71eedb-46ac-474f-9d1e-d4909a49e05b-ovn-controller-tls-certs\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.461400 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5f71eedb-46ac-474f-9d1e-d4909a49e05b-var-log-ovn\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.567997 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b283d44c-d996-450c-9b6c-dea58fe633a7-scripts\") pod \"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.568061 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b283d44c-d996-450c-9b6c-dea58fe633a7-var-lib\") pod \"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.568090 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/b283d44c-d996-450c-9b6c-dea58fe633a7-etc-ovs\") pod \"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.568146 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f71eedb-46ac-474f-9d1e-d4909a49e05b-scripts\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.568208 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b283d44c-d996-450c-9b6c-dea58fe633a7-var-run\") pod \"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.568242 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xvxn\" (UniqueName: \"kubernetes.io/projected/5f71eedb-46ac-474f-9d1e-d4909a49e05b-kube-api-access-9xvxn\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.568275 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f71eedb-46ac-474f-9d1e-d4909a49e05b-ovn-controller-tls-certs\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.568307 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b283d44c-d996-450c-9b6c-dea58fe633a7-var-log\") pod 
\"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.568367 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5f71eedb-46ac-474f-9d1e-d4909a49e05b-var-log-ovn\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.568393 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlrlb\" (UniqueName: \"kubernetes.io/projected/b283d44c-d996-450c-9b6c-dea58fe633a7-kube-api-access-vlrlb\") pod \"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.568421 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f71eedb-46ac-474f-9d1e-d4909a49e05b-combined-ca-bundle\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.568461 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5f71eedb-46ac-474f-9d1e-d4909a49e05b-var-run-ovn\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.568497 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f71eedb-46ac-474f-9d1e-d4909a49e05b-var-run\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " 
pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.569280 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f71eedb-46ac-474f-9d1e-d4909a49e05b-var-run\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.577716 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f71eedb-46ac-474f-9d1e-d4909a49e05b-scripts\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.586461 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5f71eedb-46ac-474f-9d1e-d4909a49e05b-var-log-ovn\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.587036 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5f71eedb-46ac-474f-9d1e-d4909a49e05b-var-run-ovn\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.612180 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f71eedb-46ac-474f-9d1e-d4909a49e05b-combined-ca-bundle\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.616571 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/5f71eedb-46ac-474f-9d1e-d4909a49e05b-ovn-controller-tls-certs\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.636012 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xvxn\" (UniqueName: \"kubernetes.io/projected/5f71eedb-46ac-474f-9d1e-d4909a49e05b-kube-api-access-9xvxn\") pod \"ovn-controller-mjz6w\" (UID: \"5f71eedb-46ac-474f-9d1e-d4909a49e05b\") " pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.676021 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b283d44c-d996-450c-9b6c-dea58fe633a7-etc-ovs\") pod \"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.676435 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b283d44c-d996-450c-9b6c-dea58fe633a7-var-run\") pod \"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.676548 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b283d44c-d996-450c-9b6c-dea58fe633a7-var-log\") pod \"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.676671 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlrlb\" (UniqueName: \"kubernetes.io/projected/b283d44c-d996-450c-9b6c-dea58fe633a7-kube-api-access-vlrlb\") pod 
\"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.676802 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b283d44c-d996-450c-9b6c-dea58fe633a7-scripts\") pod \"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.676903 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b283d44c-d996-450c-9b6c-dea58fe633a7-var-lib\") pod \"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.677339 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b283d44c-d996-450c-9b6c-dea58fe633a7-var-log\") pod \"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.677474 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b283d44c-d996-450c-9b6c-dea58fe633a7-var-lib\") pod \"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.677620 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b283d44c-d996-450c-9b6c-dea58fe633a7-etc-ovs\") pod \"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.677708 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b283d44c-d996-450c-9b6c-dea58fe633a7-var-run\") pod \"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.685911 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b283d44c-d996-450c-9b6c-dea58fe633a7-scripts\") pod \"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.708470 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.747164 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlrlb\" (UniqueName: \"kubernetes.io/projected/b283d44c-d996-450c-9b6c-dea58fe633a7-kube-api-access-vlrlb\") pod \"ovn-controller-ovs-rzm2l\" (UID: \"b283d44c-d996-450c-9b6c-dea58fe633a7\") " pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:04 crc kubenswrapper[4895]: I0129 08:59:04.788757 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:05 crc kubenswrapper[4895]: I0129 08:59:05.913640 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 08:59:05 crc kubenswrapper[4895]: I0129 08:59:05.930176 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:05 crc kubenswrapper[4895]: I0129 08:59:05.934711 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 29 08:59:05 crc kubenswrapper[4895]: I0129 08:59:05.934857 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 08:59:05 crc kubenswrapper[4895]: I0129 08:59:05.935513 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-5j7hd" Jan 29 08:59:05 crc kubenswrapper[4895]: I0129 08:59:05.936994 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 29 08:59:05 crc kubenswrapper[4895]: I0129 08:59:05.938850 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 29 08:59:05 crc kubenswrapper[4895]: I0129 08:59:05.941371 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.070309 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.070406 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/250930c1-98a4-4b5d-a0d7-0ba3063bc098-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.070451 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/250930c1-98a4-4b5d-a0d7-0ba3063bc098-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.070508 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/250930c1-98a4-4b5d-a0d7-0ba3063bc098-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.070544 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/250930c1-98a4-4b5d-a0d7-0ba3063bc098-config\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.070570 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250930c1-98a4-4b5d-a0d7-0ba3063bc098-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.070593 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/250930c1-98a4-4b5d-a0d7-0ba3063bc098-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.070638 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rww2\" (UniqueName: 
\"kubernetes.io/projected/250930c1-98a4-4b5d-a0d7-0ba3063bc098-kube-api-access-8rww2\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.177154 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/250930c1-98a4-4b5d-a0d7-0ba3063bc098-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.177215 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/250930c1-98a4-4b5d-a0d7-0ba3063bc098-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.177262 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/250930c1-98a4-4b5d-a0d7-0ba3063bc098-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.177294 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/250930c1-98a4-4b5d-a0d7-0ba3063bc098-config\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.177318 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250930c1-98a4-4b5d-a0d7-0ba3063bc098-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " 
pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.177345 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/250930c1-98a4-4b5d-a0d7-0ba3063bc098-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.177372 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rww2\" (UniqueName: \"kubernetes.io/projected/250930c1-98a4-4b5d-a0d7-0ba3063bc098-kube-api-access-8rww2\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.177487 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.181115 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.181638 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/250930c1-98a4-4b5d-a0d7-0ba3063bc098-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.183086 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/250930c1-98a4-4b5d-a0d7-0ba3063bc098-config\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.185299 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/250930c1-98a4-4b5d-a0d7-0ba3063bc098-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.185615 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/250930c1-98a4-4b5d-a0d7-0ba3063bc098-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.188035 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/250930c1-98a4-4b5d-a0d7-0ba3063bc098-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.215401 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.230094 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250930c1-98a4-4b5d-a0d7-0ba3063bc098-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc 
kubenswrapper[4895]: I0129 08:59:06.243617 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rww2\" (UniqueName: \"kubernetes.io/projected/250930c1-98a4-4b5d-a0d7-0ba3063bc098-kube-api-access-8rww2\") pod \"ovsdbserver-sb-0\" (UID: \"250930c1-98a4-4b5d-a0d7-0ba3063bc098\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:06 crc kubenswrapper[4895]: I0129 08:59:06.280852 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.540429 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.541969 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.546812 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-dj6g2" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.547021 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.547199 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.547353 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.585064 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.742150 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/877924c3-f4b2-4040-8b6c-bbc80d6d58af-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.742233 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/877924c3-f4b2-4040-8b6c-bbc80d6d58af-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.742278 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/877924c3-f4b2-4040-8b6c-bbc80d6d58af-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.742313 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/877924c3-f4b2-4040-8b6c-bbc80d6d58af-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.742387 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nttqc\" (UniqueName: \"kubernetes.io/projected/877924c3-f4b2-4040-8b6c-bbc80d6d58af-kube-api-access-nttqc\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.742443 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877924c3-f4b2-4040-8b6c-bbc80d6d58af-config\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " 
pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.742475 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/877924c3-f4b2-4040-8b6c-bbc80d6d58af-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.742518 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.845059 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877924c3-f4b2-4040-8b6c-bbc80d6d58af-config\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.845181 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/877924c3-f4b2-4040-8b6c-bbc80d6d58af-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.845234 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.845304 4895 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/877924c3-f4b2-4040-8b6c-bbc80d6d58af-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.845383 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/877924c3-f4b2-4040-8b6c-bbc80d6d58af-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.845419 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/877924c3-f4b2-4040-8b6c-bbc80d6d58af-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.845452 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/877924c3-f4b2-4040-8b6c-bbc80d6d58af-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.845523 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nttqc\" (UniqueName: \"kubernetes.io/projected/877924c3-f4b2-4040-8b6c-bbc80d6d58af-kube-api-access-nttqc\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.845651 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: 
\"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.846415 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/877924c3-f4b2-4040-8b6c-bbc80d6d58af-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.846590 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877924c3-f4b2-4040-8b6c-bbc80d6d58af-config\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.847457 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/877924c3-f4b2-4040-8b6c-bbc80d6d58af-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.853222 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/877924c3-f4b2-4040-8b6c-bbc80d6d58af-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.853715 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/877924c3-f4b2-4040-8b6c-bbc80d6d58af-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.855295 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/877924c3-f4b2-4040-8b6c-bbc80d6d58af-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.870300 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nttqc\" (UniqueName: \"kubernetes.io/projected/877924c3-f4b2-4040-8b6c-bbc80d6d58af-kube-api-access-nttqc\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:07 crc kubenswrapper[4895]: I0129 08:59:07.889628 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"877924c3-f4b2-4040-8b6c-bbc80d6d58af\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:08 crc kubenswrapper[4895]: I0129 08:59:08.189251 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:16 crc kubenswrapper[4895]: I0129 08:59:16.131654 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rzm2l"] Jan 29 08:59:16 crc kubenswrapper[4895]: I0129 08:59:16.543892 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rzm2l" event={"ID":"b283d44c-d996-450c-9b6c-dea58fe633a7","Type":"ContainerStarted","Data":"6ad8cc88e0cc43fbcbb3f4215080630319dfc601a9cb8a5ad749056f32d25152"} Jan 29 08:59:16 crc kubenswrapper[4895]: I0129 08:59:16.882859 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mjz6w"] Jan 29 08:59:24 crc kubenswrapper[4895]: E0129 08:59:24.786163 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 29 08:59:24 crc kubenswrapper[4895]: E0129 08:59:24.787158 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2wvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(cbcad4af-7c93-4d6e-b825-42a586db5d81): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:59:24 crc 
kubenswrapper[4895]: E0129 08:59:24.788502 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cbcad4af-7c93-4d6e-b825-42a586db5d81" Jan 29 08:59:25 crc kubenswrapper[4895]: E0129 08:59:25.692518 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cbcad4af-7c93-4d6e-b825-42a586db5d81" Jan 29 08:59:27 crc kubenswrapper[4895]: E0129 08:59:27.955895 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 29 08:59:27 crc kubenswrapper[4895]: E0129 08:59:27.956528 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dbllc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:59:27 crc kubenswrapper[4895]: E0129 08:59:27.966909 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94" Jan 29 08:59:28 crc kubenswrapper[4895]: E0129 08:59:28.721212 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94" Jan 29 08:59:28 crc kubenswrapper[4895]: W0129 08:59:28.860476 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f71eedb_46ac_474f_9d1e_d4909a49e05b.slice/crio-05adc2356624d3e6bf3e4caf49be3dad433a12d2e9c82a24a55d448427405909 WatchSource:0}: Error finding container 05adc2356624d3e6bf3e4caf49be3dad433a12d2e9c82a24a55d448427405909: Status 404 returned error can't find the container with id 05adc2356624d3e6bf3e4caf49be3dad433a12d2e9c82a24a55d448427405909 Jan 29 08:59:29 crc kubenswrapper[4895]: I0129 08:59:29.727541 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mjz6w" event={"ID":"5f71eedb-46ac-474f-9d1e-d4909a49e05b","Type":"ContainerStarted","Data":"05adc2356624d3e6bf3e4caf49be3dad433a12d2e9c82a24a55d448427405909"} Jan 29 08:59:34 crc kubenswrapper[4895]: E0129 08:59:34.975497 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 29 08:59:34 crc kubenswrapper[4895]: 
E0129 08:59:34.976434 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbqt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termination
MessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(205e527c-d0a7-4b85-9542-19a871c61693): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:59:34 crc kubenswrapper[4895]: E0129 08:59:34.978001 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="205e527c-d0a7-4b85-9542-19a871c61693" Jan 29 08:59:35 crc kubenswrapper[4895]: E0129 08:59:35.579773 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 29 08:59:35 crc kubenswrapper[4895]: E0129 08:59:35.580176 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n54bh5cfh678h55ch57fh58dh5d9h67bh5c9h646h66fhbfh655hd9h59bh5d9h6h578h56bh57bh547h547h5cdh56dh75h66dhb5h7fh5cdh64bh6bh5dcq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x77hn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(d720a04a-6de4-4dd9-b918-471d3d69de73): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:59:35 crc kubenswrapper[4895]: E0129 08:59:35.581463 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="d720a04a-6de4-4dd9-b918-471d3d69de73" Jan 29 08:59:35 crc kubenswrapper[4895]: E0129 08:59:35.652422 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 29 08:59:35 crc kubenswrapper[4895]: E0129 08:59:35.652624 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntw6t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPro
be:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(7d3ea6f8-e1cd-41fe-8169-00fc80c995b5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:59:35 crc kubenswrapper[4895]: E0129 08:59:35.653892 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" Jan 29 08:59:36 crc kubenswrapper[4895]: E0129 08:59:35.997425 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="205e527c-d0a7-4b85-9542-19a871c61693" Jan 29 08:59:36 crc kubenswrapper[4895]: E0129 08:59:35.997456 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" Jan 29 08:59:36 crc kubenswrapper[4895]: E0129 08:59:36.007010 4895 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="d720a04a-6de4-4dd9-b918-471d3d69de73" Jan 29 08:59:36 crc kubenswrapper[4895]: E0129 08:59:36.681125 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 08:59:36 crc kubenswrapper[4895]: E0129 08:59:36.681810 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsd4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-8msnl_openstack(bd6868b7-0c63-4cbf-830d-a167983e116d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:59:36 crc kubenswrapper[4895]: E0129 08:59:36.683019 4895 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-8msnl" podUID="bd6868b7-0c63-4cbf-830d-a167983e116d" Jan 29 08:59:36 crc kubenswrapper[4895]: E0129 08:59:36.686846 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 08:59:36 crc kubenswrapper[4895]: E0129 08:59:36.687093 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdxvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-ztblm_openstack(6fd978bd-9d18-4f1c-a8c5-e071e14ea337): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:59:36 crc kubenswrapper[4895]: E0129 08:59:36.688341 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-ztblm" podUID="6fd978bd-9d18-4f1c-a8c5-e071e14ea337" Jan 29 08:59:36 crc kubenswrapper[4895]: E0129 08:59:36.753639 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 08:59:36 crc kubenswrapper[4895]: E0129 08:59:36.753880 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m4mnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPoli
cy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-957ss_openstack(21a3d792-6aca-4649-8321-8ee399ce37d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:59:36 crc kubenswrapper[4895]: E0129 08:59:36.755090 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-957ss" podUID="21a3d792-6aca-4649-8321-8ee399ce37d6" Jan 29 08:59:36 crc kubenswrapper[4895]: E0129 08:59:36.805050 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 08:59:36 crc kubenswrapper[4895]: E0129 08:59:36.805327 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mnxnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-ncvl9_openstack(64de6c11-4446-427a-be04-3e23713ab128): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:59:36 crc kubenswrapper[4895]: E0129 08:59:36.806575 4895 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-ncvl9" podUID="64de6c11-4446-427a-be04-3e23713ab128" Jan 29 08:59:37 crc kubenswrapper[4895]: E0129 08:59:37.005246 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-957ss" podUID="21a3d792-6aca-4649-8321-8ee399ce37d6" Jan 29 08:59:37 crc kubenswrapper[4895]: E0129 08:59:37.006339 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-8msnl" podUID="bd6868b7-0c63-4cbf-830d-a167983e116d" Jan 29 08:59:37 crc kubenswrapper[4895]: I0129 08:59:37.471848 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 08:59:37 crc kubenswrapper[4895]: I0129 08:59:37.713606 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 08:59:38 crc kubenswrapper[4895]: I0129 08:59:38.800809 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-ztblm" Jan 29 08:59:38 crc kubenswrapper[4895]: I0129 08:59:38.898777 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ncvl9" Jan 29 08:59:38 crc kubenswrapper[4895]: I0129 08:59:38.901881 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdxvm\" (UniqueName: \"kubernetes.io/projected/6fd978bd-9d18-4f1c-a8c5-e071e14ea337-kube-api-access-hdxvm\") pod \"6fd978bd-9d18-4f1c-a8c5-e071e14ea337\" (UID: \"6fd978bd-9d18-4f1c-a8c5-e071e14ea337\") " Jan 29 08:59:38 crc kubenswrapper[4895]: I0129 08:59:38.902273 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd978bd-9d18-4f1c-a8c5-e071e14ea337-config\") pod \"6fd978bd-9d18-4f1c-a8c5-e071e14ea337\" (UID: \"6fd978bd-9d18-4f1c-a8c5-e071e14ea337\") " Jan 29 08:59:38 crc kubenswrapper[4895]: I0129 08:59:38.902847 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd978bd-9d18-4f1c-a8c5-e071e14ea337-config" (OuterVolumeSpecName: "config") pod "6fd978bd-9d18-4f1c-a8c5-e071e14ea337" (UID: "6fd978bd-9d18-4f1c-a8c5-e071e14ea337"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:38 crc kubenswrapper[4895]: I0129 08:59:38.904208 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd978bd-9d18-4f1c-a8c5-e071e14ea337-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:38 crc kubenswrapper[4895]: I0129 08:59:38.911080 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd978bd-9d18-4f1c-a8c5-e071e14ea337-kube-api-access-hdxvm" (OuterVolumeSpecName: "kube-api-access-hdxvm") pod "6fd978bd-9d18-4f1c-a8c5-e071e14ea337" (UID: "6fd978bd-9d18-4f1c-a8c5-e071e14ea337"). InnerVolumeSpecName "kube-api-access-hdxvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.005057 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64de6c11-4446-427a-be04-3e23713ab128-dns-svc\") pod \"64de6c11-4446-427a-be04-3e23713ab128\" (UID: \"64de6c11-4446-427a-be04-3e23713ab128\") " Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.005136 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64de6c11-4446-427a-be04-3e23713ab128-config\") pod \"64de6c11-4446-427a-be04-3e23713ab128\" (UID: \"64de6c11-4446-427a-be04-3e23713ab128\") " Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.005181 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnxnf\" (UniqueName: \"kubernetes.io/projected/64de6c11-4446-427a-be04-3e23713ab128-kube-api-access-mnxnf\") pod \"64de6c11-4446-427a-be04-3e23713ab128\" (UID: \"64de6c11-4446-427a-be04-3e23713ab128\") " Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.005417 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdxvm\" (UniqueName: \"kubernetes.io/projected/6fd978bd-9d18-4f1c-a8c5-e071e14ea337-kube-api-access-hdxvm\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.005949 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64de6c11-4446-427a-be04-3e23713ab128-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64de6c11-4446-427a-be04-3e23713ab128" (UID: "64de6c11-4446-427a-be04-3e23713ab128"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.006061 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64de6c11-4446-427a-be04-3e23713ab128-config" (OuterVolumeSpecName: "config") pod "64de6c11-4446-427a-be04-3e23713ab128" (UID: "64de6c11-4446-427a-be04-3e23713ab128"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.009793 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64de6c11-4446-427a-be04-3e23713ab128-kube-api-access-mnxnf" (OuterVolumeSpecName: "kube-api-access-mnxnf") pod "64de6c11-4446-427a-be04-3e23713ab128" (UID: "64de6c11-4446-427a-be04-3e23713ab128"). InnerVolumeSpecName "kube-api-access-mnxnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.029454 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-ztblm" Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.029453 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-ztblm" event={"ID":"6fd978bd-9d18-4f1c-a8c5-e071e14ea337","Type":"ContainerDied","Data":"6041f7600ebc6ab9f919f91014e1b005eaf978a8edb3e8167dd4b72a35362b43"} Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.031555 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"877924c3-f4b2-4040-8b6c-bbc80d6d58af","Type":"ContainerStarted","Data":"46e3425e8192f49ad169dd4e7fffe1be2b56a90172651afe70f0e1f2dd3d3820"} Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.033058 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"250930c1-98a4-4b5d-a0d7-0ba3063bc098","Type":"ContainerStarted","Data":"4c37ba1f2c1943687b5dc89ce9a9d0f9edb678e2ba1315788ce92eb330b66efc"} Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.034702 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ncvl9" event={"ID":"64de6c11-4446-427a-be04-3e23713ab128","Type":"ContainerDied","Data":"972ab06988612ed3b1c511c2d88b4458cd8ef1533b00cb6118bc493cbc4bf418"} Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.034772 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ncvl9" Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.106607 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64de6c11-4446-427a-be04-3e23713ab128-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.106655 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64de6c11-4446-427a-be04-3e23713ab128-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.106666 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnxnf\" (UniqueName: \"kubernetes.io/projected/64de6c11-4446-427a-be04-3e23713ab128-kube-api-access-mnxnf\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.129366 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ztblm"] Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.146976 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ztblm"] Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.176310 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ncvl9"] Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.183814 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ncvl9"] Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.223576 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64de6c11-4446-427a-be04-3e23713ab128" path="/var/lib/kubelet/pods/64de6c11-4446-427a-be04-3e23713ab128/volumes" Jan 29 08:59:39 crc kubenswrapper[4895]: I0129 08:59:39.224123 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd978bd-9d18-4f1c-a8c5-e071e14ea337" 
path="/var/lib/kubelet/pods/6fd978bd-9d18-4f1c-a8c5-e071e14ea337/volumes" Jan 29 08:59:39 crc kubenswrapper[4895]: E0129 08:59:39.832789 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 29 08:59:39 crc kubenswrapper[4895]: E0129 08:59:39.832850 4895 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 29 08:59:39 crc kubenswrapper[4895]: E0129 08:59:39.832998 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vkg97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(598e3a01-9620-4320-b00b-ac10baddb593): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 08:59:39 crc kubenswrapper[4895]: E0129 08:59:39.834229 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="598e3a01-9620-4320-b00b-ac10baddb593" Jan 29 08:59:40 crc kubenswrapper[4895]: E0129 08:59:40.060263 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="598e3a01-9620-4320-b00b-ac10baddb593" Jan 29 08:59:41 crc kubenswrapper[4895]: I0129 08:59:41.067197 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-mjz6w" event={"ID":"5f71eedb-46ac-474f-9d1e-d4909a49e05b","Type":"ContainerStarted","Data":"b0f379ed566b2c06a19c23195a6c981639c3cad8202b08b54a3435064d5d6189"} Jan 29 08:59:41 crc kubenswrapper[4895]: I0129 08:59:41.067669 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-mjz6w" Jan 29 08:59:41 crc kubenswrapper[4895]: I0129 08:59:41.069702 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"250930c1-98a4-4b5d-a0d7-0ba3063bc098","Type":"ContainerStarted","Data":"5d6701e905b73ec6b97e825bff117f281ab9d919b8a4c7aa34c500ad6fe37823"} Jan 29 08:59:41 crc kubenswrapper[4895]: I0129 08:59:41.071311 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"877924c3-f4b2-4040-8b6c-bbc80d6d58af","Type":"ContainerStarted","Data":"ab26176e73906cfe471e3b9c92a7f34c4c25e6d87babac9fe351a3adba115b9e"} Jan 29 08:59:41 crc kubenswrapper[4895]: I0129 08:59:41.073462 4895 generic.go:334] "Generic (PLEG): container finished" podID="b283d44c-d996-450c-9b6c-dea58fe633a7" containerID="7a482f215fd0f2460e35e7cdcacd5b82bc7d1d665e191b4b357fc7e134701140" exitCode=0 Jan 29 08:59:41 crc kubenswrapper[4895]: I0129 08:59:41.073520 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rzm2l" event={"ID":"b283d44c-d996-450c-9b6c-dea58fe633a7","Type":"ContainerDied","Data":"7a482f215fd0f2460e35e7cdcacd5b82bc7d1d665e191b4b357fc7e134701140"} Jan 29 08:59:41 crc kubenswrapper[4895]: I0129 08:59:41.096699 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mjz6w" podStartSLOduration=26.141172371 podStartE2EDuration="37.09666919s" podCreationTimestamp="2026-01-29 08:59:04 +0000 UTC" firstStartedPulling="2026-01-29 08:59:28.866261817 +0000 UTC m=+1110.507769963" lastFinishedPulling="2026-01-29 08:59:39.821758626 +0000 UTC m=+1121.463266782" 
observedRunningTime="2026-01-29 08:59:41.092070958 +0000 UTC m=+1122.733579124" watchObservedRunningTime="2026-01-29 08:59:41.09666919 +0000 UTC m=+1122.738177326" Jan 29 08:59:42 crc kubenswrapper[4895]: I0129 08:59:42.089942 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rzm2l" event={"ID":"b283d44c-d996-450c-9b6c-dea58fe633a7","Type":"ContainerStarted","Data":"ed37fd183438df1a16c1c4cfaf6a3fcc99d332e003e17be29a50614d9f5699dd"} Jan 29 08:59:42 crc kubenswrapper[4895]: I0129 08:59:42.090802 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rzm2l" event={"ID":"b283d44c-d996-450c-9b6c-dea58fe633a7","Type":"ContainerStarted","Data":"3354884296ca1f57bd03c00cd314c8ce2e6959844f5b90b576ce27ff66b0afeb"} Jan 29 08:59:42 crc kubenswrapper[4895]: I0129 08:59:42.140222 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rzm2l" podStartSLOduration=14.587963856 podStartE2EDuration="38.140191129s" podCreationTimestamp="2026-01-29 08:59:04 +0000 UTC" firstStartedPulling="2026-01-29 08:59:16.269169354 +0000 UTC m=+1097.910677500" lastFinishedPulling="2026-01-29 08:59:39.821396627 +0000 UTC m=+1121.462904773" observedRunningTime="2026-01-29 08:59:42.133457669 +0000 UTC m=+1123.774965835" watchObservedRunningTime="2026-01-29 08:59:42.140191129 +0000 UTC m=+1123.781699275" Jan 29 08:59:43 crc kubenswrapper[4895]: I0129 08:59:43.100756 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cbcad4af-7c93-4d6e-b825-42a586db5d81","Type":"ContainerStarted","Data":"26d34178f24362025be3c60472a15a6f3b96f11f999bca0c1b399079c33299d8"} Jan 29 08:59:43 crc kubenswrapper[4895]: I0129 08:59:43.102149 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:43 crc kubenswrapper[4895]: I0129 08:59:43.102191 4895 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 08:59:44 crc kubenswrapper[4895]: I0129 08:59:44.124551 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"250930c1-98a4-4b5d-a0d7-0ba3063bc098","Type":"ContainerStarted","Data":"0fc66f513fb2e0ba75b2dead4f33a2a8b3fd7f788015ddc6741fef556e825031"} Jan 29 08:59:44 crc kubenswrapper[4895]: I0129 08:59:44.127533 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94","Type":"ContainerStarted","Data":"4a4030b32273ccf58bddebf098d82ec9052256a4e9bb5478eb7c4e4ee36cb5fe"} Jan 29 08:59:44 crc kubenswrapper[4895]: I0129 08:59:44.165678 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=36.356068612 podStartE2EDuration="41.165648453s" podCreationTimestamp="2026-01-29 08:59:03 +0000 UTC" firstStartedPulling="2026-01-29 08:59:38.876933456 +0000 UTC m=+1120.518441602" lastFinishedPulling="2026-01-29 08:59:43.686513297 +0000 UTC m=+1125.328021443" observedRunningTime="2026-01-29 08:59:44.157399352 +0000 UTC m=+1125.798907508" watchObservedRunningTime="2026-01-29 08:59:44.165648453 +0000 UTC m=+1125.807156599" Jan 29 08:59:45 crc kubenswrapper[4895]: I0129 08:59:45.142706 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"877924c3-f4b2-4040-8b6c-bbc80d6d58af","Type":"ContainerStarted","Data":"6403476cff65233472e54dde49abee543dc360622156c7803b10bcb7bd256507"} Jan 29 08:59:45 crc kubenswrapper[4895]: I0129 08:59:45.178898 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=34.382626065 podStartE2EDuration="39.178852131s" podCreationTimestamp="2026-01-29 08:59:06 +0000 UTC" firstStartedPulling="2026-01-29 08:59:38.87676986 +0000 UTC m=+1120.518278006" lastFinishedPulling="2026-01-29 
08:59:43.672995926 +0000 UTC m=+1125.314504072" observedRunningTime="2026-01-29 08:59:45.169938093 +0000 UTC m=+1126.811446259" watchObservedRunningTime="2026-01-29 08:59:45.178852131 +0000 UTC m=+1126.820360297" Jan 29 08:59:45 crc kubenswrapper[4895]: I0129 08:59:45.284431 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:45 crc kubenswrapper[4895]: I0129 08:59:45.358127 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.021255 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.021462 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.147398 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.186976 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.569587 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8msnl"] Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.602652 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-lc26n"] Jan 29 08:59:46 crc 
kubenswrapper[4895]: I0129 08:59:46.604370 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.611776 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.614302 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-d9mcq"] Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.615981 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.633868 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lc26n"] Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.635714 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.650999 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-d9mcq"] Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.726396 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/50cc7d34-44f8-490c-a18c-2d747721d20a-ovs-rundir\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.726502 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50cc7d34-44f8-490c-a18c-2d747721d20a-combined-ca-bundle\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 
08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.726545 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-config\") pod \"dnsmasq-dns-6bc7876d45-d9mcq\" (UID: \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.726576 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-d9mcq\" (UID: \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.726597 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50cc7d34-44f8-490c-a18c-2d747721d20a-config\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.726651 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-d9mcq\" (UID: \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.726670 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50cc7d34-44f8-490c-a18c-2d747721d20a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " 
pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.726770 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/50cc7d34-44f8-490c-a18c-2d747721d20a-ovn-rundir\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.726799 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gwtb\" (UniqueName: \"kubernetes.io/projected/50cc7d34-44f8-490c-a18c-2d747721d20a-kube-api-access-7gwtb\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.726940 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btsdx\" (UniqueName: \"kubernetes.io/projected/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-kube-api-access-btsdx\") pod \"dnsmasq-dns-6bc7876d45-d9mcq\" (UID: \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.829225 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/50cc7d34-44f8-490c-a18c-2d747721d20a-ovn-rundir\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.829291 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwtb\" (UniqueName: \"kubernetes.io/projected/50cc7d34-44f8-490c-a18c-2d747721d20a-kube-api-access-7gwtb\") pod \"ovn-controller-metrics-lc26n\" (UID: 
\"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.829321 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btsdx\" (UniqueName: \"kubernetes.io/projected/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-kube-api-access-btsdx\") pod \"dnsmasq-dns-6bc7876d45-d9mcq\" (UID: \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.829374 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/50cc7d34-44f8-490c-a18c-2d747721d20a-ovs-rundir\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.829412 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50cc7d34-44f8-490c-a18c-2d747721d20a-combined-ca-bundle\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.829438 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-config\") pod \"dnsmasq-dns-6bc7876d45-d9mcq\" (UID: \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.829466 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-d9mcq\" (UID: \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\") " 
pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.829486 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50cc7d34-44f8-490c-a18c-2d747721d20a-config\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.829520 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-d9mcq\" (UID: \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.829542 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50cc7d34-44f8-490c-a18c-2d747721d20a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.829703 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/50cc7d34-44f8-490c-a18c-2d747721d20a-ovs-rundir\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.830438 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/50cc7d34-44f8-490c-a18c-2d747721d20a-ovn-rundir\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 
08:59:46.830955 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-d9mcq\" (UID: \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.830998 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-config\") pod \"dnsmasq-dns-6bc7876d45-d9mcq\" (UID: \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.831032 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50cc7d34-44f8-490c-a18c-2d747721d20a-config\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.831404 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-d9mcq\" (UID: \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.839153 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50cc7d34-44f8-490c-a18c-2d747721d20a-combined-ca-bundle\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.846717 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/50cc7d34-44f8-490c-a18c-2d747721d20a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.852430 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btsdx\" (UniqueName: \"kubernetes.io/projected/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-kube-api-access-btsdx\") pod \"dnsmasq-dns-6bc7876d45-d9mcq\" (UID: \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.857726 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gwtb\" (UniqueName: \"kubernetes.io/projected/50cc7d34-44f8-490c-a18c-2d747721d20a-kube-api-access-7gwtb\") pod \"ovn-controller-metrics-lc26n\" (UID: \"50cc7d34-44f8-490c-a18c-2d747721d20a\") " pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.931453 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-957ss"] Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.952977 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lc26n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.967999 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-k965n"] Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.969844 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.970428 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.977223 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-k965n"] Jan 29 08:59:46 crc kubenswrapper[4895]: I0129 08:59:46.985131 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.138109 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-k965n\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.138640 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw9cd\" (UniqueName: \"kubernetes.io/projected/f8d33507-d01b-456f-b8af-8e61c9461ac0-kube-api-access-gw9cd\") pod \"dnsmasq-dns-8554648995-k965n\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.138712 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-config\") pod \"dnsmasq-dns-8554648995-k965n\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.138748 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-k965n\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " 
pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.138811 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-dns-svc\") pod \"dnsmasq-dns-8554648995-k965n\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.150796 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8msnl" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.190532 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.197874 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8msnl" event={"ID":"bd6868b7-0c63-4cbf-830d-a167983e116d","Type":"ContainerDied","Data":"7176788c68fb148c4380d2f0b8074c28031e4ab03bd91556087183e0db4d1f4c"} Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.197978 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8msnl" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.240088 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-k965n\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.240197 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-dns-svc\") pod \"dnsmasq-dns-8554648995-k965n\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.240261 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-k965n\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.240289 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw9cd\" (UniqueName: \"kubernetes.io/projected/f8d33507-d01b-456f-b8af-8e61c9461ac0-kube-api-access-gw9cd\") pod \"dnsmasq-dns-8554648995-k965n\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.240344 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-config\") pod \"dnsmasq-dns-8554648995-k965n\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " pod="openstack/dnsmasq-dns-8554648995-k965n" 
Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.243768 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-config\") pod \"dnsmasq-dns-8554648995-k965n\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.244556 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-k965n\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.245191 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-dns-svc\") pod \"dnsmasq-dns-8554648995-k965n\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.245798 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-k965n\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.290001 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw9cd\" (UniqueName: \"kubernetes.io/projected/f8d33507-d01b-456f-b8af-8e61c9461ac0-kube-api-access-gw9cd\") pod \"dnsmasq-dns-8554648995-k965n\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.322681 4895 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.338939 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.341336 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd6868b7-0c63-4cbf-830d-a167983e116d-dns-svc\") pod \"bd6868b7-0c63-4cbf-830d-a167983e116d\" (UID: \"bd6868b7-0c63-4cbf-830d-a167983e116d\") " Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.341542 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6868b7-0c63-4cbf-830d-a167983e116d-config\") pod \"bd6868b7-0c63-4cbf-830d-a167983e116d\" (UID: \"bd6868b7-0c63-4cbf-830d-a167983e116d\") " Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.341599 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsd4s\" (UniqueName: \"kubernetes.io/projected/bd6868b7-0c63-4cbf-830d-a167983e116d-kube-api-access-hsd4s\") pod \"bd6868b7-0c63-4cbf-830d-a167983e116d\" (UID: \"bd6868b7-0c63-4cbf-830d-a167983e116d\") " Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.346125 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd6868b7-0c63-4cbf-830d-a167983e116d-config" (OuterVolumeSpecName: "config") pod "bd6868b7-0c63-4cbf-830d-a167983e116d" (UID: "bd6868b7-0c63-4cbf-830d-a167983e116d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.346949 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd6868b7-0c63-4cbf-830d-a167983e116d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd6868b7-0c63-4cbf-830d-a167983e116d" (UID: "bd6868b7-0c63-4cbf-830d-a167983e116d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.379677 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd6868b7-0c63-4cbf-830d-a167983e116d-kube-api-access-hsd4s" (OuterVolumeSpecName: "kube-api-access-hsd4s") pod "bd6868b7-0c63-4cbf-830d-a167983e116d" (UID: "bd6868b7-0c63-4cbf-830d-a167983e116d"). InnerVolumeSpecName "kube-api-access-hsd4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.444426 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6868b7-0c63-4cbf-830d-a167983e116d-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.444474 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsd4s\" (UniqueName: \"kubernetes.io/projected/bd6868b7-0c63-4cbf-830d-a167983e116d-kube-api-access-hsd4s\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.444488 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd6868b7-0c63-4cbf-830d-a167983e116d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.980231 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8msnl"] Jan 29 08:59:47 crc kubenswrapper[4895]: I0129 08:59:47.998319 4895 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8msnl"] Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.023600 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lc26n"] Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.120403 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-d9mcq"] Jan 29 08:59:48 crc kubenswrapper[4895]: W0129 08:59:48.122086 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfe19fd5_6707_4e9e_b0e2_760f4064a6d3.slice/crio-586174b4ba145842eb53c26c97576977b7931593db7b28f4184640ac3932ee3f WatchSource:0}: Error finding container 586174b4ba145842eb53c26c97576977b7931593db7b28f4184640ac3932ee3f: Status 404 returned error can't find the container with id 586174b4ba145842eb53c26c97576977b7931593db7b28f4184640ac3932ee3f Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.189453 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-957ss" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.190251 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.219445 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lc26n" event={"ID":"50cc7d34-44f8-490c-a18c-2d747721d20a","Type":"ContainerStarted","Data":"ce22dbbeb3a625be54ab802c654f92d31bb28ef6b1bcec1df60f363c1455f6fd"} Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.220976 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-957ss" event={"ID":"21a3d792-6aca-4649-8321-8ee399ce37d6","Type":"ContainerDied","Data":"b77a137c63e1083ce7b7d9f49157a8a14cea2c5e9bcc5b3faef864432fa8e906"} Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.221060 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-957ss" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.224207 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" event={"ID":"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3","Type":"ContainerStarted","Data":"586174b4ba145842eb53c26c97576977b7931593db7b28f4184640ac3932ee3f"} Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.243786 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.352580 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21a3d792-6aca-4649-8321-8ee399ce37d6-config\") pod \"21a3d792-6aca-4649-8321-8ee399ce37d6\" (UID: \"21a3d792-6aca-4649-8321-8ee399ce37d6\") " Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.352682 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-m4mnv\" (UniqueName: \"kubernetes.io/projected/21a3d792-6aca-4649-8321-8ee399ce37d6-kube-api-access-m4mnv\") pod \"21a3d792-6aca-4649-8321-8ee399ce37d6\" (UID: \"21a3d792-6aca-4649-8321-8ee399ce37d6\") " Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.352742 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21a3d792-6aca-4649-8321-8ee399ce37d6-dns-svc\") pod \"21a3d792-6aca-4649-8321-8ee399ce37d6\" (UID: \"21a3d792-6aca-4649-8321-8ee399ce37d6\") " Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.353381 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21a3d792-6aca-4649-8321-8ee399ce37d6-config" (OuterVolumeSpecName: "config") pod "21a3d792-6aca-4649-8321-8ee399ce37d6" (UID: "21a3d792-6aca-4649-8321-8ee399ce37d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.354741 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21a3d792-6aca-4649-8321-8ee399ce37d6-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.355389 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21a3d792-6aca-4649-8321-8ee399ce37d6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "21a3d792-6aca-4649-8321-8ee399ce37d6" (UID: "21a3d792-6aca-4649-8321-8ee399ce37d6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.359387 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21a3d792-6aca-4649-8321-8ee399ce37d6-kube-api-access-m4mnv" (OuterVolumeSpecName: "kube-api-access-m4mnv") pod "21a3d792-6aca-4649-8321-8ee399ce37d6" (UID: "21a3d792-6aca-4649-8321-8ee399ce37d6"). InnerVolumeSpecName "kube-api-access-m4mnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.451310 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-k965n"] Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.457032 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4mnv\" (UniqueName: \"kubernetes.io/projected/21a3d792-6aca-4649-8321-8ee399ce37d6-kube-api-access-m4mnv\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.457090 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21a3d792-6aca-4649-8321-8ee399ce37d6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.504576 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.506597 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.512057 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.512145 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-l4vqg" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.512184 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.512551 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.528181 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.611552 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-957ss"] Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.621705 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-957ss"] Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.664678 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d524d5b9-7173-4f57-92f5-bf50a940538b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.665293 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d524d5b9-7173-4f57-92f5-bf50a940538b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: 
I0129 08:59:48.665381 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d524d5b9-7173-4f57-92f5-bf50a940538b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.665428 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d524d5b9-7173-4f57-92f5-bf50a940538b-config\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.665618 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d524d5b9-7173-4f57-92f5-bf50a940538b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.665796 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldnmq\" (UniqueName: \"kubernetes.io/projected/d524d5b9-7173-4f57-92f5-bf50a940538b-kube-api-access-ldnmq\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.666164 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d524d5b9-7173-4f57-92f5-bf50a940538b-scripts\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.768259 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/d524d5b9-7173-4f57-92f5-bf50a940538b-scripts\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.768453 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d524d5b9-7173-4f57-92f5-bf50a940538b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.768508 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d524d5b9-7173-4f57-92f5-bf50a940538b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.768556 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d524d5b9-7173-4f57-92f5-bf50a940538b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.768590 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d524d5b9-7173-4f57-92f5-bf50a940538b-config\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.768698 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d524d5b9-7173-4f57-92f5-bf50a940538b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 
08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.768731 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldnmq\" (UniqueName: \"kubernetes.io/projected/d524d5b9-7173-4f57-92f5-bf50a940538b-kube-api-access-ldnmq\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.769633 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d524d5b9-7173-4f57-92f5-bf50a940538b-scripts\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.769949 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d524d5b9-7173-4f57-92f5-bf50a940538b-config\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.770051 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d524d5b9-7173-4f57-92f5-bf50a940538b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.775130 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d524d5b9-7173-4f57-92f5-bf50a940538b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.776396 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d524d5b9-7173-4f57-92f5-bf50a940538b-combined-ca-bundle\") pod 
\"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.779830 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d524d5b9-7173-4f57-92f5-bf50a940538b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.788645 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldnmq\" (UniqueName: \"kubernetes.io/projected/d524d5b9-7173-4f57-92f5-bf50a940538b-kube-api-access-ldnmq\") pod \"ovn-northd-0\" (UID: \"d524d5b9-7173-4f57-92f5-bf50a940538b\") " pod="openstack/ovn-northd-0" Jan 29 08:59:48 crc kubenswrapper[4895]: I0129 08:59:48.828551 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 08:59:49 crc kubenswrapper[4895]: I0129 08:59:49.221820 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21a3d792-6aca-4649-8321-8ee399ce37d6" path="/var/lib/kubelet/pods/21a3d792-6aca-4649-8321-8ee399ce37d6/volumes" Jan 29 08:59:49 crc kubenswrapper[4895]: I0129 08:59:49.222746 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd6868b7-0c63-4cbf-830d-a167983e116d" path="/var/lib/kubelet/pods/bd6868b7-0c63-4cbf-830d-a167983e116d/volumes" Jan 29 08:59:49 crc kubenswrapper[4895]: I0129 08:59:49.244102 4895 generic.go:334] "Generic (PLEG): container finished" podID="bfe19fd5-6707-4e9e-b0e2-760f4064a6d3" containerID="cfb10ba87875f274cb8faaf3fe52a9c34b82adf0e81eb92c91140767fd0609f8" exitCode=0 Jan 29 08:59:49 crc kubenswrapper[4895]: I0129 08:59:49.244210 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" 
event={"ID":"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3","Type":"ContainerDied","Data":"cfb10ba87875f274cb8faaf3fe52a9c34b82adf0e81eb92c91140767fd0609f8"} Jan 29 08:59:49 crc kubenswrapper[4895]: I0129 08:59:49.255440 4895 generic.go:334] "Generic (PLEG): container finished" podID="996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94" containerID="4a4030b32273ccf58bddebf098d82ec9052256a4e9bb5478eb7c4e4ee36cb5fe" exitCode=0 Jan 29 08:59:49 crc kubenswrapper[4895]: I0129 08:59:49.255558 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94","Type":"ContainerDied","Data":"4a4030b32273ccf58bddebf098d82ec9052256a4e9bb5478eb7c4e4ee36cb5fe"} Jan 29 08:59:49 crc kubenswrapper[4895]: I0129 08:59:49.258514 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lc26n" event={"ID":"50cc7d34-44f8-490c-a18c-2d747721d20a","Type":"ContainerStarted","Data":"ca9c6fdd2f192f3dde9070c409dc48765a578300e3d221e00639eed8647eab06"} Jan 29 08:59:49 crc kubenswrapper[4895]: I0129 08:59:49.264824 4895 generic.go:334] "Generic (PLEG): container finished" podID="f8d33507-d01b-456f-b8af-8e61c9461ac0" containerID="aed32ac24683f35d67702df3f02f492f5c5d40ba8de88882168c894d1ffef54e" exitCode=0 Jan 29 08:59:49 crc kubenswrapper[4895]: I0129 08:59:49.265116 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-k965n" event={"ID":"f8d33507-d01b-456f-b8af-8e61c9461ac0","Type":"ContainerDied","Data":"aed32ac24683f35d67702df3f02f492f5c5d40ba8de88882168c894d1ffef54e"} Jan 29 08:59:49 crc kubenswrapper[4895]: I0129 08:59:49.265440 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-k965n" event={"ID":"f8d33507-d01b-456f-b8af-8e61c9461ac0","Type":"ContainerStarted","Data":"47cb3f3fd10cec92a350a14f6c1ba1eb5168f49c0308fe9b100a7759d5af82c6"} Jan 29 08:59:49 crc kubenswrapper[4895]: I0129 08:59:49.333387 4895 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 08:59:49 crc kubenswrapper[4895]: I0129 08:59:49.340972 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-lc26n" podStartSLOduration=3.340866636 podStartE2EDuration="3.340866636s" podCreationTimestamp="2026-01-29 08:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:59:49.332888733 +0000 UTC m=+1130.974396879" watchObservedRunningTime="2026-01-29 08:59:49.340866636 +0000 UTC m=+1130.982374782" Jan 29 08:59:50 crc kubenswrapper[4895]: I0129 08:59:50.277541 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-k965n" event={"ID":"f8d33507-d01b-456f-b8af-8e61c9461ac0","Type":"ContainerStarted","Data":"6250b68ae8289523ca90a0f14fe623da07746d0612c9ce9d3dac65169d7e6d96"} Jan 29 08:59:50 crc kubenswrapper[4895]: I0129 08:59:50.278139 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:50 crc kubenswrapper[4895]: I0129 08:59:50.281597 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d720a04a-6de4-4dd9-b918-471d3d69de73","Type":"ContainerStarted","Data":"84a934d01e59253775d1fdf88536c3cbad9ac0ef46254e5e462eb38cecd12643"} Jan 29 08:59:50 crc kubenswrapper[4895]: I0129 08:59:50.281840 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 29 08:59:50 crc kubenswrapper[4895]: I0129 08:59:50.284257 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d524d5b9-7173-4f57-92f5-bf50a940538b","Type":"ContainerStarted","Data":"5a86dd0e4cbf561c7eb8015b96ddd9bb4024e6823efe3cc5d6ecb5cc4280a0f7"} Jan 29 08:59:50 crc kubenswrapper[4895]: I0129 08:59:50.289172 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" event={"ID":"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3","Type":"ContainerStarted","Data":"86ab194ce91b25b433bc8b97414e538d9f353e4d069dbf1bab5468443303ac3a"} Jan 29 08:59:50 crc kubenswrapper[4895]: I0129 08:59:50.289333 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:50 crc kubenswrapper[4895]: I0129 08:59:50.291274 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94","Type":"ContainerStarted","Data":"647824fb67a37bb3e65341fb2b302d051378f18951bb33865f3f8e2f44784c2f"} Jan 29 08:59:50 crc kubenswrapper[4895]: I0129 08:59:50.309719 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-k965n" podStartSLOduration=3.911681802 podStartE2EDuration="4.30969761s" podCreationTimestamp="2026-01-29 08:59:46 +0000 UTC" firstStartedPulling="2026-01-29 08:59:48.463155948 +0000 UTC m=+1130.104664094" lastFinishedPulling="2026-01-29 08:59:48.861171756 +0000 UTC m=+1130.502679902" observedRunningTime="2026-01-29 08:59:50.305860497 +0000 UTC m=+1131.947368653" watchObservedRunningTime="2026-01-29 08:59:50.30969761 +0000 UTC m=+1131.951205756" Jan 29 08:59:50 crc kubenswrapper[4895]: I0129 08:59:50.331055 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=11.347168944 podStartE2EDuration="55.33103508s" podCreationTimestamp="2026-01-29 08:58:55 +0000 UTC" firstStartedPulling="2026-01-29 08:58:59.461819165 +0000 UTC m=+1081.103327311" lastFinishedPulling="2026-01-29 08:59:43.445685301 +0000 UTC m=+1125.087193447" observedRunningTime="2026-01-29 08:59:50.326529209 +0000 UTC m=+1131.968037365" watchObservedRunningTime="2026-01-29 08:59:50.33103508 +0000 UTC m=+1131.972543226" Jan 29 08:59:50 crc kubenswrapper[4895]: I0129 08:59:50.358799 4895 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" podStartSLOduration=3.909280418 podStartE2EDuration="4.358772691s" podCreationTimestamp="2026-01-29 08:59:46 +0000 UTC" firstStartedPulling="2026-01-29 08:59:48.125447833 +0000 UTC m=+1129.766955979" lastFinishedPulling="2026-01-29 08:59:48.574940106 +0000 UTC m=+1130.216448252" observedRunningTime="2026-01-29 08:59:50.357684902 +0000 UTC m=+1131.999193058" watchObservedRunningTime="2026-01-29 08:59:50.358772691 +0000 UTC m=+1132.000280827" Jan 29 08:59:50 crc kubenswrapper[4895]: I0129 08:59:50.385835 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.706650694 podStartE2EDuration="52.385799873s" podCreationTimestamp="2026-01-29 08:58:58 +0000 UTC" firstStartedPulling="2026-01-29 08:59:00.063615889 +0000 UTC m=+1081.705124035" lastFinishedPulling="2026-01-29 08:59:49.742765068 +0000 UTC m=+1131.384273214" observedRunningTime="2026-01-29 08:59:50.380561704 +0000 UTC m=+1132.022069860" watchObservedRunningTime="2026-01-29 08:59:50.385799873 +0000 UTC m=+1132.027308019" Jan 29 08:59:51 crc kubenswrapper[4895]: I0129 08:59:51.299510 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5","Type":"ContainerStarted","Data":"7c3e0fe6ef1bed526f92c62b23d0efd1cc2b74bb08f91fe399d1c4d8dcb612a5"} Jan 29 08:59:51 crc kubenswrapper[4895]: I0129 08:59:51.301494 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d524d5b9-7173-4f57-92f5-bf50a940538b","Type":"ContainerStarted","Data":"1bb1e0c9ede46f0c21b23f71ce37035d8f7074a5fe2cc9c1ba3490194aabe06b"} Jan 29 08:59:51 crc kubenswrapper[4895]: I0129 08:59:51.301531 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"d524d5b9-7173-4f57-92f5-bf50a940538b","Type":"ContainerStarted","Data":"360ad62805a006f1fa4e34368a8d8088e908a56578cb051d457d0e7f5f67c18c"} Jan 29 08:59:51 crc kubenswrapper[4895]: I0129 08:59:51.301697 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 29 08:59:51 crc kubenswrapper[4895]: I0129 08:59:51.305861 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"205e527c-d0a7-4b85-9542-19a871c61693","Type":"ContainerStarted","Data":"4687e5ee34e3165a2fc6cf1fed9d40b521ec839b00d24838669ed767a64b97b1"} Jan 29 08:59:51 crc kubenswrapper[4895]: I0129 08:59:51.376387 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.154327528 podStartE2EDuration="3.376361708s" podCreationTimestamp="2026-01-29 08:59:48 +0000 UTC" firstStartedPulling="2026-01-29 08:59:49.385748776 +0000 UTC m=+1131.027256922" lastFinishedPulling="2026-01-29 08:59:50.607782956 +0000 UTC m=+1132.249291102" observedRunningTime="2026-01-29 08:59:51.352934551 +0000 UTC m=+1132.994442707" watchObservedRunningTime="2026-01-29 08:59:51.376361708 +0000 UTC m=+1133.017869854" Jan 29 08:59:52 crc kubenswrapper[4895]: E0129 08:59:52.311706 4895 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.142:34926->38.129.56.142:46589: write tcp 38.129.56.142:34926->38.129.56.142:46589: write: broken pipe Jan 29 08:59:54 crc kubenswrapper[4895]: I0129 08:59:54.332968 4895 generic.go:334] "Generic (PLEG): container finished" podID="205e527c-d0a7-4b85-9542-19a871c61693" containerID="4687e5ee34e3165a2fc6cf1fed9d40b521ec839b00d24838669ed767a64b97b1" exitCode=0 Jan 29 08:59:54 crc kubenswrapper[4895]: I0129 08:59:54.333043 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"205e527c-d0a7-4b85-9542-19a871c61693","Type":"ContainerDied","Data":"4687e5ee34e3165a2fc6cf1fed9d40b521ec839b00d24838669ed767a64b97b1"} Jan 29 08:59:54 crc kubenswrapper[4895]: I0129 08:59:54.337460 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"598e3a01-9620-4320-b00b-ac10baddb593","Type":"ContainerStarted","Data":"be34aa8314b085cbebf2822418324ea77c91ee8ffcb5ecd7cc5e0a3b50371264"} Jan 29 08:59:54 crc kubenswrapper[4895]: I0129 08:59:54.337759 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 08:59:54 crc kubenswrapper[4895]: I0129 08:59:54.387855 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.390771631 podStartE2EDuration="54.387823033s" podCreationTimestamp="2026-01-29 08:59:00 +0000 UTC" firstStartedPulling="2026-01-29 08:59:02.755123483 +0000 UTC m=+1084.396631629" lastFinishedPulling="2026-01-29 08:59:53.752174885 +0000 UTC m=+1135.393683031" observedRunningTime="2026-01-29 08:59:54.383331202 +0000 UTC m=+1136.024839358" watchObservedRunningTime="2026-01-29 08:59:54.387823033 +0000 UTC m=+1136.029331179" Jan 29 08:59:55 crc kubenswrapper[4895]: I0129 08:59:55.346937 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"205e527c-d0a7-4b85-9542-19a871c61693","Type":"ContainerStarted","Data":"ac3a1fe40835d33711d3ebf9d37516dce47e0a4154cc93b7f7b05a362daa1eb0"} Jan 29 08:59:55 crc kubenswrapper[4895]: I0129 08:59:55.373663 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371977.481138 podStartE2EDuration="59.37363762s" podCreationTimestamp="2026-01-29 08:58:56 +0000 UTC" firstStartedPulling="2026-01-29 08:58:59.93381461 +0000 UTC m=+1081.575322756" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 08:59:55.366654564 +0000 UTC m=+1137.008162720" watchObservedRunningTime="2026-01-29 08:59:55.37363762 +0000 UTC m=+1137.015145756" Jan 29 08:59:56 crc kubenswrapper[4895]: E0129 08:59:56.790296 4895 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.142:34944->38.129.56.142:46589: write tcp 38.129.56.142:34944->38.129.56.142:46589: write: broken pipe Jan 29 08:59:56 crc kubenswrapper[4895]: I0129 08:59:56.972429 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:57 crc kubenswrapper[4895]: I0129 08:59:57.229682 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 29 08:59:57 crc kubenswrapper[4895]: I0129 08:59:57.230263 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 29 08:59:57 crc kubenswrapper[4895]: I0129 08:59:57.310790 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 29 08:59:57 crc kubenswrapper[4895]: I0129 08:59:57.341171 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 08:59:57 crc kubenswrapper[4895]: I0129 08:59:57.424283 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-d9mcq"] Jan 29 08:59:57 crc kubenswrapper[4895]: I0129 08:59:57.424599 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" podUID="bfe19fd5-6707-4e9e-b0e2-760f4064a6d3" containerName="dnsmasq-dns" containerID="cri-o://86ab194ce91b25b433bc8b97414e538d9f353e4d069dbf1bab5468443303ac3a" gracePeriod=10 Jan 29 08:59:57 crc kubenswrapper[4895]: I0129 08:59:57.492777 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 
29 08:59:57 crc kubenswrapper[4895]: I0129 08:59:57.902970 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.035630 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-config\") pod \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\" (UID: \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\") " Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.035837 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btsdx\" (UniqueName: \"kubernetes.io/projected/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-kube-api-access-btsdx\") pod \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\" (UID: \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\") " Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.035955 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-dns-svc\") pod \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\" (UID: \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\") " Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.036029 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-ovsdbserver-sb\") pod \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\" (UID: \"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3\") " Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.076556 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-kube-api-access-btsdx" (OuterVolumeSpecName: "kube-api-access-btsdx") pod "bfe19fd5-6707-4e9e-b0e2-760f4064a6d3" (UID: "bfe19fd5-6707-4e9e-b0e2-760f4064a6d3"). InnerVolumeSpecName "kube-api-access-btsdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.102368 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-config" (OuterVolumeSpecName: "config") pod "bfe19fd5-6707-4e9e-b0e2-760f4064a6d3" (UID: "bfe19fd5-6707-4e9e-b0e2-760f4064a6d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.104803 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bfe19fd5-6707-4e9e-b0e2-760f4064a6d3" (UID: "bfe19fd5-6707-4e9e-b0e2-760f4064a6d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.110687 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bfe19fd5-6707-4e9e-b0e2-760f4064a6d3" (UID: "bfe19fd5-6707-4e9e-b0e2-760f4064a6d3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.158335 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.158398 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btsdx\" (UniqueName: \"kubernetes.io/projected/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-kube-api-access-btsdx\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.158428 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.158440 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.196392 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-b99gm"] Jan 29 08:59:58 crc kubenswrapper[4895]: E0129 08:59:58.196791 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe19fd5-6707-4e9e-b0e2-760f4064a6d3" containerName="dnsmasq-dns" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.196818 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe19fd5-6707-4e9e-b0e2-760f4064a6d3" containerName="dnsmasq-dns" Jan 29 08:59:58 crc kubenswrapper[4895]: E0129 08:59:58.196856 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe19fd5-6707-4e9e-b0e2-760f4064a6d3" containerName="init" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.196865 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bfe19fd5-6707-4e9e-b0e2-760f4064a6d3" containerName="init" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.197088 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfe19fd5-6707-4e9e-b0e2-760f4064a6d3" containerName="dnsmasq-dns" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.197770 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-b99gm" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.208723 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-b99gm"] Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.260575 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617fe48f-9b10-427c-aab3-1d2619c7bb09-operator-scripts\") pod \"keystone-db-create-b99gm\" (UID: \"617fe48f-9b10-427c-aab3-1d2619c7bb09\") " pod="openstack/keystone-db-create-b99gm" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.260633 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c92zp\" (UniqueName: \"kubernetes.io/projected/617fe48f-9b10-427c-aab3-1d2619c7bb09-kube-api-access-c92zp\") pod \"keystone-db-create-b99gm\" (UID: \"617fe48f-9b10-427c-aab3-1d2619c7bb09\") " pod="openstack/keystone-db-create-b99gm" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.362419 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617fe48f-9b10-427c-aab3-1d2619c7bb09-operator-scripts\") pod \"keystone-db-create-b99gm\" (UID: \"617fe48f-9b10-427c-aab3-1d2619c7bb09\") " pod="openstack/keystone-db-create-b99gm" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.362485 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c92zp\" (UniqueName: 
\"kubernetes.io/projected/617fe48f-9b10-427c-aab3-1d2619c7bb09-kube-api-access-c92zp\") pod \"keystone-db-create-b99gm\" (UID: \"617fe48f-9b10-427c-aab3-1d2619c7bb09\") " pod="openstack/keystone-db-create-b99gm" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.363785 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617fe48f-9b10-427c-aab3-1d2619c7bb09-operator-scripts\") pod \"keystone-db-create-b99gm\" (UID: \"617fe48f-9b10-427c-aab3-1d2619c7bb09\") " pod="openstack/keystone-db-create-b99gm" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.372641 4895 generic.go:334] "Generic (PLEG): container finished" podID="bfe19fd5-6707-4e9e-b0e2-760f4064a6d3" containerID="86ab194ce91b25b433bc8b97414e538d9f353e4d069dbf1bab5468443303ac3a" exitCode=0 Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.373633 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.378108 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" event={"ID":"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3","Type":"ContainerDied","Data":"86ab194ce91b25b433bc8b97414e538d9f353e4d069dbf1bab5468443303ac3a"} Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.378203 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-d9mcq" event={"ID":"bfe19fd5-6707-4e9e-b0e2-760f4064a6d3","Type":"ContainerDied","Data":"586174b4ba145842eb53c26c97576977b7931593db7b28f4184640ac3932ee3f"} Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.378231 4895 scope.go:117] "RemoveContainer" containerID="86ab194ce91b25b433bc8b97414e538d9f353e4d069dbf1bab5468443303ac3a" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.388946 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c92zp\" 
(UniqueName: \"kubernetes.io/projected/617fe48f-9b10-427c-aab3-1d2619c7bb09-kube-api-access-c92zp\") pod \"keystone-db-create-b99gm\" (UID: \"617fe48f-9b10-427c-aab3-1d2619c7bb09\") " pod="openstack/keystone-db-create-b99gm" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.450034 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-d9mcq"] Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.455516 4895 scope.go:117] "RemoveContainer" containerID="cfb10ba87875f274cb8faaf3fe52a9c34b82adf0e81eb92c91140767fd0609f8" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.467083 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.467895 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.474973 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-d9mcq"] Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.516893 4895 scope.go:117] "RemoveContainer" containerID="86ab194ce91b25b433bc8b97414e538d9f353e4d069dbf1bab5468443303ac3a" Jan 29 08:59:58 crc kubenswrapper[4895]: E0129 08:59:58.518628 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86ab194ce91b25b433bc8b97414e538d9f353e4d069dbf1bab5468443303ac3a\": container with ID starting with 86ab194ce91b25b433bc8b97414e538d9f353e4d069dbf1bab5468443303ac3a not found: ID does not exist" containerID="86ab194ce91b25b433bc8b97414e538d9f353e4d069dbf1bab5468443303ac3a" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.518710 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86ab194ce91b25b433bc8b97414e538d9f353e4d069dbf1bab5468443303ac3a"} err="failed to get container status 
\"86ab194ce91b25b433bc8b97414e538d9f353e4d069dbf1bab5468443303ac3a\": rpc error: code = NotFound desc = could not find container \"86ab194ce91b25b433bc8b97414e538d9f353e4d069dbf1bab5468443303ac3a\": container with ID starting with 86ab194ce91b25b433bc8b97414e538d9f353e4d069dbf1bab5468443303ac3a not found: ID does not exist" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.518754 4895 scope.go:117] "RemoveContainer" containerID="cfb10ba87875f274cb8faaf3fe52a9c34b82adf0e81eb92c91140767fd0609f8" Jan 29 08:59:58 crc kubenswrapper[4895]: E0129 08:59:58.519244 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfb10ba87875f274cb8faaf3fe52a9c34b82adf0e81eb92c91140767fd0609f8\": container with ID starting with cfb10ba87875f274cb8faaf3fe52a9c34b82adf0e81eb92c91140767fd0609f8 not found: ID does not exist" containerID="cfb10ba87875f274cb8faaf3fe52a9c34b82adf0e81eb92c91140767fd0609f8" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.519296 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfb10ba87875f274cb8faaf3fe52a9c34b82adf0e81eb92c91140767fd0609f8"} err="failed to get container status \"cfb10ba87875f274cb8faaf3fe52a9c34b82adf0e81eb92c91140767fd0609f8\": rpc error: code = NotFound desc = could not find container \"cfb10ba87875f274cb8faaf3fe52a9c34b82adf0e81eb92c91140767fd0609f8\": container with ID starting with cfb10ba87875f274cb8faaf3fe52a9c34b82adf0e81eb92c91140767fd0609f8 not found: ID does not exist" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.520145 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-b99gm" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.624345 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-fzmwt"] Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.625951 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-fzmwt" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.633476 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-fzmwt"] Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.640339 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f408-account-create-update-9lsgg"] Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.641490 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f408-account-create-update-9lsgg" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.647660 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.659713 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f408-account-create-update-9lsgg"] Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.721325 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5b5f-account-create-update-8gmtx"] Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.725505 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5b5f-account-create-update-8gmtx" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.728156 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.741817 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b5f-account-create-update-8gmtx"] Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.756232 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.787533 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c111f64-ae01-431e-a625-7b2131a90998-operator-scripts\") pod \"keystone-f408-account-create-update-9lsgg\" (UID: \"1c111f64-ae01-431e-a625-7b2131a90998\") " pod="openstack/keystone-f408-account-create-update-9lsgg" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.787652 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2h6l\" (UniqueName: \"kubernetes.io/projected/1c111f64-ae01-431e-a625-7b2131a90998-kube-api-access-g2h6l\") pod \"keystone-f408-account-create-update-9lsgg\" (UID: \"1c111f64-ae01-431e-a625-7b2131a90998\") " pod="openstack/keystone-f408-account-create-update-9lsgg" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.787689 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d38c12b-6fa7-411f-91bf-a0d0a6ed733c-operator-scripts\") pod \"placement-db-create-fzmwt\" (UID: \"9d38c12b-6fa7-411f-91bf-a0d0a6ed733c\") " pod="openstack/placement-db-create-fzmwt" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.787743 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzz84\" (UniqueName: \"kubernetes.io/projected/9d38c12b-6fa7-411f-91bf-a0d0a6ed733c-kube-api-access-jzz84\") pod \"placement-db-create-fzmwt\" (UID: \"9d38c12b-6fa7-411f-91bf-a0d0a6ed733c\") " pod="openstack/placement-db-create-fzmwt" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.889274 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c111f64-ae01-431e-a625-7b2131a90998-operator-scripts\") pod \"keystone-f408-account-create-update-9lsgg\" (UID: \"1c111f64-ae01-431e-a625-7b2131a90998\") " pod="openstack/keystone-f408-account-create-update-9lsgg" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.889396 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2h6l\" (UniqueName: \"kubernetes.io/projected/1c111f64-ae01-431e-a625-7b2131a90998-kube-api-access-g2h6l\") pod \"keystone-f408-account-create-update-9lsgg\" (UID: \"1c111f64-ae01-431e-a625-7b2131a90998\") " pod="openstack/keystone-f408-account-create-update-9lsgg" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.889418 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d38c12b-6fa7-411f-91bf-a0d0a6ed733c-operator-scripts\") pod \"placement-db-create-fzmwt\" (UID: \"9d38c12b-6fa7-411f-91bf-a0d0a6ed733c\") " pod="openstack/placement-db-create-fzmwt" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.889467 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzz84\" (UniqueName: \"kubernetes.io/projected/9d38c12b-6fa7-411f-91bf-a0d0a6ed733c-kube-api-access-jzz84\") pod \"placement-db-create-fzmwt\" (UID: \"9d38c12b-6fa7-411f-91bf-a0d0a6ed733c\") " pod="openstack/placement-db-create-fzmwt" Jan 29 08:59:58 crc kubenswrapper[4895]: 
I0129 08:59:58.889502 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42d29420-dc1a-4983-b157-59364db98935-operator-scripts\") pod \"placement-5b5f-account-create-update-8gmtx\" (UID: \"42d29420-dc1a-4983-b157-59364db98935\") " pod="openstack/placement-5b5f-account-create-update-8gmtx" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.889549 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc4bk\" (UniqueName: \"kubernetes.io/projected/42d29420-dc1a-4983-b157-59364db98935-kube-api-access-rc4bk\") pod \"placement-5b5f-account-create-update-8gmtx\" (UID: \"42d29420-dc1a-4983-b157-59364db98935\") " pod="openstack/placement-5b5f-account-create-update-8gmtx" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.890962 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d38c12b-6fa7-411f-91bf-a0d0a6ed733c-operator-scripts\") pod \"placement-db-create-fzmwt\" (UID: \"9d38c12b-6fa7-411f-91bf-a0d0a6ed733c\") " pod="openstack/placement-db-create-fzmwt" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.891521 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c111f64-ae01-431e-a625-7b2131a90998-operator-scripts\") pod \"keystone-f408-account-create-update-9lsgg\" (UID: \"1c111f64-ae01-431e-a625-7b2131a90998\") " pod="openstack/keystone-f408-account-create-update-9lsgg" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.912094 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-4r786"] Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.914003 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzz84\" (UniqueName: 
\"kubernetes.io/projected/9d38c12b-6fa7-411f-91bf-a0d0a6ed733c-kube-api-access-jzz84\") pod \"placement-db-create-fzmwt\" (UID: \"9d38c12b-6fa7-411f-91bf-a0d0a6ed733c\") " pod="openstack/placement-db-create-fzmwt" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.916349 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2h6l\" (UniqueName: \"kubernetes.io/projected/1c111f64-ae01-431e-a625-7b2131a90998-kube-api-access-g2h6l\") pod \"keystone-f408-account-create-update-9lsgg\" (UID: \"1c111f64-ae01-431e-a625-7b2131a90998\") " pod="openstack/keystone-f408-account-create-update-9lsgg" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.926699 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4r786"] Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.927235 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4r786" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.972041 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-fzmwt" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.993069 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42d29420-dc1a-4983-b157-59364db98935-operator-scripts\") pod \"placement-5b5f-account-create-update-8gmtx\" (UID: \"42d29420-dc1a-4983-b157-59364db98935\") " pod="openstack/placement-5b5f-account-create-update-8gmtx" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.993164 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc4bk\" (UniqueName: \"kubernetes.io/projected/42d29420-dc1a-4983-b157-59364db98935-kube-api-access-rc4bk\") pod \"placement-5b5f-account-create-update-8gmtx\" (UID: \"42d29420-dc1a-4983-b157-59364db98935\") " pod="openstack/placement-5b5f-account-create-update-8gmtx" Jan 29 08:59:58 crc kubenswrapper[4895]: I0129 08:59:58.993946 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42d29420-dc1a-4983-b157-59364db98935-operator-scripts\") pod \"placement-5b5f-account-create-update-8gmtx\" (UID: \"42d29420-dc1a-4983-b157-59364db98935\") " pod="openstack/placement-5b5f-account-create-update-8gmtx" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.003155 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f408-account-create-update-9lsgg" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.018507 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-eb0f-account-create-update-8hlmq"] Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.020843 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-eb0f-account-create-update-8hlmq" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.024527 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.026771 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc4bk\" (UniqueName: \"kubernetes.io/projected/42d29420-dc1a-4983-b157-59364db98935-kube-api-access-rc4bk\") pod \"placement-5b5f-account-create-update-8gmtx\" (UID: \"42d29420-dc1a-4983-b157-59364db98935\") " pod="openstack/placement-5b5f-account-create-update-8gmtx" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.032881 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-eb0f-account-create-update-8hlmq"] Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.064261 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b5f-account-create-update-8gmtx" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.147362 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/508f7a95-0307-4a55-8e27-29e44a6823ef-operator-scripts\") pod \"glance-eb0f-account-create-update-8hlmq\" (UID: \"508f7a95-0307-4a55-8e27-29e44a6823ef\") " pod="openstack/glance-eb0f-account-create-update-8hlmq" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.147483 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk8kb\" (UniqueName: \"kubernetes.io/projected/ad71f34b-a6b5-4947-9dab-83d1655a8d7a-kube-api-access-lk8kb\") pod \"glance-db-create-4r786\" (UID: \"ad71f34b-a6b5-4947-9dab-83d1655a8d7a\") " pod="openstack/glance-db-create-4r786" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.147518 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad71f34b-a6b5-4947-9dab-83d1655a8d7a-operator-scripts\") pod \"glance-db-create-4r786\" (UID: \"ad71f34b-a6b5-4947-9dab-83d1655a8d7a\") " pod="openstack/glance-db-create-4r786" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.147552 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n6br\" (UniqueName: \"kubernetes.io/projected/508f7a95-0307-4a55-8e27-29e44a6823ef-kube-api-access-2n6br\") pod \"glance-eb0f-account-create-update-8hlmq\" (UID: \"508f7a95-0307-4a55-8e27-29e44a6823ef\") " pod="openstack/glance-eb0f-account-create-update-8hlmq" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.153747 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-b99gm"] Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.276423 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/508f7a95-0307-4a55-8e27-29e44a6823ef-operator-scripts\") pod \"glance-eb0f-account-create-update-8hlmq\" (UID: \"508f7a95-0307-4a55-8e27-29e44a6823ef\") " pod="openstack/glance-eb0f-account-create-update-8hlmq" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.276543 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk8kb\" (UniqueName: \"kubernetes.io/projected/ad71f34b-a6b5-4947-9dab-83d1655a8d7a-kube-api-access-lk8kb\") pod \"glance-db-create-4r786\" (UID: \"ad71f34b-a6b5-4947-9dab-83d1655a8d7a\") " pod="openstack/glance-db-create-4r786" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.276604 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad71f34b-a6b5-4947-9dab-83d1655a8d7a-operator-scripts\") pod 
\"glance-db-create-4r786\" (UID: \"ad71f34b-a6b5-4947-9dab-83d1655a8d7a\") " pod="openstack/glance-db-create-4r786" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.277442 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/508f7a95-0307-4a55-8e27-29e44a6823ef-operator-scripts\") pod \"glance-eb0f-account-create-update-8hlmq\" (UID: \"508f7a95-0307-4a55-8e27-29e44a6823ef\") " pod="openstack/glance-eb0f-account-create-update-8hlmq" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.277501 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad71f34b-a6b5-4947-9dab-83d1655a8d7a-operator-scripts\") pod \"glance-db-create-4r786\" (UID: \"ad71f34b-a6b5-4947-9dab-83d1655a8d7a\") " pod="openstack/glance-db-create-4r786" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.277557 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n6br\" (UniqueName: \"kubernetes.io/projected/508f7a95-0307-4a55-8e27-29e44a6823ef-kube-api-access-2n6br\") pod \"glance-eb0f-account-create-update-8hlmq\" (UID: \"508f7a95-0307-4a55-8e27-29e44a6823ef\") " pod="openstack/glance-eb0f-account-create-update-8hlmq" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.286433 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfe19fd5-6707-4e9e-b0e2-760f4064a6d3" path="/var/lib/kubelet/pods/bfe19fd5-6707-4e9e-b0e2-760f4064a6d3/volumes" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.321558 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n6br\" (UniqueName: \"kubernetes.io/projected/508f7a95-0307-4a55-8e27-29e44a6823ef-kube-api-access-2n6br\") pod \"glance-eb0f-account-create-update-8hlmq\" (UID: \"508f7a95-0307-4a55-8e27-29e44a6823ef\") " pod="openstack/glance-eb0f-account-create-update-8hlmq" Jan 29 
08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.325299 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk8kb\" (UniqueName: \"kubernetes.io/projected/ad71f34b-a6b5-4947-9dab-83d1655a8d7a-kube-api-access-lk8kb\") pod \"glance-db-create-4r786\" (UID: \"ad71f34b-a6b5-4947-9dab-83d1655a8d7a\") " pod="openstack/glance-db-create-4r786" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.325766 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4r786" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.364940 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-eb0f-account-create-update-8hlmq" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.411590 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-b99gm" event={"ID":"617fe48f-9b10-427c-aab3-1d2619c7bb09","Type":"ContainerStarted","Data":"1cf6096a4bf763ae16e679a7fdb880271eba3e207a618534fbfb92a294d9a77d"} Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.499312 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.885844 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f408-account-create-update-9lsgg"] Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.898133 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 08:59:59 crc kubenswrapper[4895]: I0129 08:59:59.905709 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-fzmwt"] Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.032462 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b5f-account-create-update-8gmtx"] Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.057428 4895 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.100503 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4r786"] Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.115185 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-eb0f-account-create-update-8hlmq"] Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.178798 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz"] Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.180226 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.180821 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.183224 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.186590 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.195689 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz"] Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.314786 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7da1d197-6753-4aad-b1a4-55cfb0a2f742-secret-volume\") pod \"collect-profiles-29494620-jz4mz\" (UID: \"7da1d197-6753-4aad-b1a4-55cfb0a2f742\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.315027 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mswmq\" (UniqueName: \"kubernetes.io/projected/7da1d197-6753-4aad-b1a4-55cfb0a2f742-kube-api-access-mswmq\") pod \"collect-profiles-29494620-jz4mz\" (UID: \"7da1d197-6753-4aad-b1a4-55cfb0a2f742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.315084 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7da1d197-6753-4aad-b1a4-55cfb0a2f742-config-volume\") pod \"collect-profiles-29494620-jz4mz\" (UID: \"7da1d197-6753-4aad-b1a4-55cfb0a2f742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.418258 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7da1d197-6753-4aad-b1a4-55cfb0a2f742-config-volume\") pod \"collect-profiles-29494620-jz4mz\" (UID: \"7da1d197-6753-4aad-b1a4-55cfb0a2f742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.419512 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f408-account-create-update-9lsgg" event={"ID":"1c111f64-ae01-431e-a625-7b2131a90998","Type":"ContainerStarted","Data":"3e9b1ad38a047ab1774a04c602c4e1951d5c7a7294439849dff7e1c00538a5cf"} Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.419403 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7da1d197-6753-4aad-b1a4-55cfb0a2f742-config-volume\") pod 
\"collect-profiles-29494620-jz4mz\" (UID: \"7da1d197-6753-4aad-b1a4-55cfb0a2f742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.419599 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7da1d197-6753-4aad-b1a4-55cfb0a2f742-secret-volume\") pod \"collect-profiles-29494620-jz4mz\" (UID: \"7da1d197-6753-4aad-b1a4-55cfb0a2f742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.419826 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mswmq\" (UniqueName: \"kubernetes.io/projected/7da1d197-6753-4aad-b1a4-55cfb0a2f742-kube-api-access-mswmq\") pod \"collect-profiles-29494620-jz4mz\" (UID: \"7da1d197-6753-4aad-b1a4-55cfb0a2f742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.423657 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b5f-account-create-update-8gmtx" event={"ID":"42d29420-dc1a-4983-b157-59364db98935","Type":"ContainerStarted","Data":"55ffddfede17ef094fbad059189ce3163196cd58a47ced9de00f7f7196f66ca8"} Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.425356 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4r786" event={"ID":"ad71f34b-a6b5-4947-9dab-83d1655a8d7a","Type":"ContainerStarted","Data":"6c40612421612c14d558e35072270f629a4a9004a04f080f6a96b68d57c6d715"} Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.427188 4895 generic.go:334] "Generic (PLEG): container finished" podID="617fe48f-9b10-427c-aab3-1d2619c7bb09" containerID="10fb10f56bb393b8f9d11f0fe884375099897712719cf571e0194e4ff3d78552" exitCode=0 Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.427262 4895 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-b99gm" event={"ID":"617fe48f-9b10-427c-aab3-1d2619c7bb09","Type":"ContainerDied","Data":"10fb10f56bb393b8f9d11f0fe884375099897712719cf571e0194e4ff3d78552"} Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.427722 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7da1d197-6753-4aad-b1a4-55cfb0a2f742-secret-volume\") pod \"collect-profiles-29494620-jz4mz\" (UID: \"7da1d197-6753-4aad-b1a4-55cfb0a2f742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.428694 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-eb0f-account-create-update-8hlmq" event={"ID":"508f7a95-0307-4a55-8e27-29e44a6823ef","Type":"ContainerStarted","Data":"a3ea295a369605593b518b4fd858f1e97bcd3f61f70d4f7b103ac806159525b3"} Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.433954 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-fzmwt" event={"ID":"9d38c12b-6fa7-411f-91bf-a0d0a6ed733c","Type":"ContainerStarted","Data":"0bde742f0073e2ade4117f3c788c56d2d1bbb2a405d06f3c2c5fc0602d6ae784"} Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.438929 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mswmq\" (UniqueName: \"kubernetes.io/projected/7da1d197-6753-4aad-b1a4-55cfb0a2f742-kube-api-access-mswmq\") pod \"collect-profiles-29494620-jz4mz\" (UID: \"7da1d197-6753-4aad-b1a4-55cfb0a2f742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.510950 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 29 09:00:00 crc kubenswrapper[4895]: I0129 09:00:00.540504 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.138232 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.335805 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g84tq"] Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.337748 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.358538 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g84tq"] Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.506128 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-g84tq\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") " pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.506203 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-config\") pod \"dnsmasq-dns-b8fbc5445-g84tq\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") " pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.506262 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4s77\" (UniqueName: \"kubernetes.io/projected/a9a8b6ea-9175-4086-98ad-a7b35af3798d-kube-api-access-j4s77\") pod \"dnsmasq-dns-b8fbc5445-g84tq\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") " pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:01 
crc kubenswrapper[4895]: I0129 09:00:01.506303 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-g84tq\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") " pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.506399 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-g84tq\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") " pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.580981 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz"] Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.607964 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-g84tq\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") " pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.608599 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-g84tq\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") " pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.608668 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-config\") pod 
\"dnsmasq-dns-b8fbc5445-g84tq\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") " pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.608736 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4s77\" (UniqueName: \"kubernetes.io/projected/a9a8b6ea-9175-4086-98ad-a7b35af3798d-kube-api-access-j4s77\") pod \"dnsmasq-dns-b8fbc5445-g84tq\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") " pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.608786 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-g84tq\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") " pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.611080 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-g84tq\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") " pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.612400 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-config\") pod \"dnsmasq-dns-b8fbc5445-g84tq\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") " pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.612425 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-g84tq\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.613296 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-g84tq\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") " pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:01 crc kubenswrapper[4895]: W0129 09:00:01.660233 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7da1d197_6753_4aad_b1a4_55cfb0a2f742.slice/crio-b73abe3f73c11a4345c2f3ff0cfce8eec2a882ab6e3ab679ccde4aff1ecd8ce9 WatchSource:0}: Error finding container b73abe3f73c11a4345c2f3ff0cfce8eec2a882ab6e3ab679ccde4aff1ecd8ce9: Status 404 returned error can't find the container with id b73abe3f73c11a4345c2f3ff0cfce8eec2a882ab6e3ab679ccde4aff1ecd8ce9 Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.713722 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4s77\" (UniqueName: \"kubernetes.io/projected/a9a8b6ea-9175-4086-98ad-a7b35af3798d-kube-api-access-j4s77\") pod \"dnsmasq-dns-b8fbc5445-g84tq\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") " pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:01 crc kubenswrapper[4895]: I0129 09:00:01.978371 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.343664 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-b99gm" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.458735 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617fe48f-9b10-427c-aab3-1d2619c7bb09-operator-scripts\") pod \"617fe48f-9b10-427c-aab3-1d2619c7bb09\" (UID: \"617fe48f-9b10-427c-aab3-1d2619c7bb09\") " Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.459434 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c92zp\" (UniqueName: \"kubernetes.io/projected/617fe48f-9b10-427c-aab3-1d2619c7bb09-kube-api-access-c92zp\") pod \"617fe48f-9b10-427c-aab3-1d2619c7bb09\" (UID: \"617fe48f-9b10-427c-aab3-1d2619c7bb09\") " Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.461714 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/617fe48f-9b10-427c-aab3-1d2619c7bb09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "617fe48f-9b10-427c-aab3-1d2619c7bb09" (UID: "617fe48f-9b10-427c-aab3-1d2619c7bb09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.473289 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/617fe48f-9b10-427c-aab3-1d2619c7bb09-kube-api-access-c92zp" (OuterVolumeSpecName: "kube-api-access-c92zp") pod "617fe48f-9b10-427c-aab3-1d2619c7bb09" (UID: "617fe48f-9b10-427c-aab3-1d2619c7bb09"). InnerVolumeSpecName "kube-api-access-c92zp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.473463 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 29 09:00:02 crc kubenswrapper[4895]: E0129 09:00:02.473930 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617fe48f-9b10-427c-aab3-1d2619c7bb09" containerName="mariadb-database-create" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.473953 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="617fe48f-9b10-427c-aab3-1d2619c7bb09" containerName="mariadb-database-create" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.474148 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="617fe48f-9b10-427c-aab3-1d2619c7bb09" containerName="mariadb-database-create" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.486752 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.489620 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f408-account-create-update-9lsgg" event={"ID":"1c111f64-ae01-431e-a625-7b2131a90998","Type":"ContainerStarted","Data":"0b2d0147c3e633071187bf910a0446568146f2be77738176d1da4326554272fd"} Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.490307 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-sgn7d" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.497665 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.497910 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.498803 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 29 
09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.501163 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-eb0f-account-create-update-8hlmq" event={"ID":"508f7a95-0307-4a55-8e27-29e44a6823ef","Type":"ContainerStarted","Data":"2c4d79873a0bf753d61b4cb09a0f82ba5b2b3d31f8e533474572322c6bdc1e25"} Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.507471 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b5f-account-create-update-8gmtx" event={"ID":"42d29420-dc1a-4983-b157-59364db98935","Type":"ContainerStarted","Data":"5caf7ff5a638d9ebbcc04049d656a4ef8bd381aa7d33d7d5b4a4fcaa8a524084"} Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.517967 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.518824 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-fzmwt" event={"ID":"9d38c12b-6fa7-411f-91bf-a0d0a6ed733c","Type":"ContainerStarted","Data":"bdfc2af1ace031accce5628b7e570e5199e5cfcbb8e3b18cb3ff0908c801fd84"} Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.536156 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" event={"ID":"7da1d197-6753-4aad-b1a4-55cfb0a2f742","Type":"ContainerStarted","Data":"362f944e570cfa6af434a286fa1d7d1d3e8d5d3acfed304a08ba4ff3c537dda1"} Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.536222 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" event={"ID":"7da1d197-6753-4aad-b1a4-55cfb0a2f742","Type":"ContainerStarted","Data":"b73abe3f73c11a4345c2f3ff0cfce8eec2a882ab6e3ab679ccde4aff1ecd8ce9"} Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.543396 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4r786" 
event={"ID":"ad71f34b-a6b5-4947-9dab-83d1655a8d7a","Type":"ContainerStarted","Data":"077d409bc60227d4a6ca64441f83a2f2df9558d9b64fc524a6b74ab4808e4dc7"} Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.561036 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-b99gm" event={"ID":"617fe48f-9b10-427c-aab3-1d2619c7bb09","Type":"ContainerDied","Data":"1cf6096a4bf763ae16e679a7fdb880271eba3e207a618534fbfb92a294d9a77d"} Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.561092 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cf6096a4bf763ae16e679a7fdb880271eba3e207a618534fbfb92a294d9a77d" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.561504 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-b99gm" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.562846 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/db0d35a0-7174-452f-bd71-2dae8f7dff11-cache\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.562908 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2srp2\" (UniqueName: \"kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-kube-api-access-2srp2\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.563017 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0d35a0-7174-452f-bd71-2dae8f7dff11-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " 
pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.563147 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.563199 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/db0d35a0-7174-452f-bd71-2dae8f7dff11-lock\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.563254 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.563720 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c92zp\" (UniqueName: \"kubernetes.io/projected/617fe48f-9b10-427c-aab3-1d2619c7bb09-kube-api-access-c92zp\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.563743 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617fe48f-9b10-427c-aab3-1d2619c7bb09-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.593848 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-fzmwt" podStartSLOduration=4.593823658 podStartE2EDuration="4.593823658s" podCreationTimestamp="2026-01-29 08:59:58 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:02.569414286 +0000 UTC m=+1144.210922432" watchObservedRunningTime="2026-01-29 09:00:02.593823658 +0000 UTC m=+1144.235331804" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.602727 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-eb0f-account-create-update-8hlmq" podStartSLOduration=4.602684646 podStartE2EDuration="4.602684646s" podCreationTimestamp="2026-01-29 08:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:02.591988689 +0000 UTC m=+1144.233496855" watchObservedRunningTime="2026-01-29 09:00:02.602684646 +0000 UTC m=+1144.244192792" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.624033 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5b5f-account-create-update-8gmtx" podStartSLOduration=4.624003085 podStartE2EDuration="4.624003085s" podCreationTimestamp="2026-01-29 08:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:02.617568763 +0000 UTC m=+1144.259076919" watchObservedRunningTime="2026-01-29 09:00:02.624003085 +0000 UTC m=+1144.265511231" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.647162 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-drdk8"] Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.648387 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.651798 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.651901 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.651857 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.667999 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/db0d35a0-7174-452f-bd71-2dae8f7dff11-cache\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.668064 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2srp2\" (UniqueName: \"kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-kube-api-access-2srp2\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.668111 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0d35a0-7174-452f-bd71-2dae8f7dff11-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.668171 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " 
pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.668663 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/db0d35a0-7174-452f-bd71-2dae8f7dff11-lock\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.668748 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.669373 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.669528 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/db0d35a0-7174-452f-bd71-2dae8f7dff11-lock\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.669963 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/db0d35a0-7174-452f-bd71-2dae8f7dff11-cache\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: E0129 09:00:02.670143 4895 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 09:00:02 crc kubenswrapper[4895]: 
E0129 09:00:02.670172 4895 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 09:00:02 crc kubenswrapper[4895]: E0129 09:00:02.670262 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift podName:db0d35a0-7174-452f-bd71-2dae8f7dff11 nodeName:}" failed. No retries permitted until 2026-01-29 09:00:03.170240121 +0000 UTC m=+1144.811748487 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift") pod "swift-storage-0" (UID: "db0d35a0-7174-452f-bd71-2dae8f7dff11") : configmap "swift-ring-files" not found Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.677976 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0d35a0-7174-452f-bd71-2dae8f7dff11-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.682858 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-drdk8"] Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.689452 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-f408-account-create-update-9lsgg" podStartSLOduration=4.689413614 podStartE2EDuration="4.689413614s" podCreationTimestamp="2026-01-29 08:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:02.665304838 +0000 UTC m=+1144.306812984" watchObservedRunningTime="2026-01-29 09:00:02.689413614 +0000 UTC m=+1144.330921760" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.694104 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2srp2\" (UniqueName: \"kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-kube-api-access-2srp2\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.712434 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-4r786" podStartSLOduration=4.712404948 podStartE2EDuration="4.712404948s" podCreationTimestamp="2026-01-29 08:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:02.702226976 +0000 UTC m=+1144.343735132" watchObservedRunningTime="2026-01-29 09:00:02.712404948 +0000 UTC m=+1144.353913094" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.715417 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.771451 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/073f4b22-319f-4cbb-ac96-c0a18da477a6-ring-data-devices\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.771706 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-swiftconf\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc 
kubenswrapper[4895]: I0129 09:00:02.772081 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-dispersionconf\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.772280 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjdt4\" (UniqueName: \"kubernetes.io/projected/073f4b22-319f-4cbb-ac96-c0a18da477a6-kube-api-access-rjdt4\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.772463 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-combined-ca-bundle\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.772594 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/073f4b22-319f-4cbb-ac96-c0a18da477a6-scripts\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.773269 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/073f4b22-319f-4cbb-ac96-c0a18da477a6-etc-swift\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 
09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.773433 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g84tq"] Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.793073 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" podStartSLOduration=2.793043333 podStartE2EDuration="2.793043333s" podCreationTimestamp="2026-01-29 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:02.782676696 +0000 UTC m=+1144.424184842" watchObservedRunningTime="2026-01-29 09:00:02.793043333 +0000 UTC m=+1144.434551479" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.874634 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/073f4b22-319f-4cbb-ac96-c0a18da477a6-etc-swift\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.874716 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/073f4b22-319f-4cbb-ac96-c0a18da477a6-ring-data-devices\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.874755 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-swiftconf\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.874778 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-dispersionconf\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.874846 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjdt4\" (UniqueName: \"kubernetes.io/projected/073f4b22-319f-4cbb-ac96-c0a18da477a6-kube-api-access-rjdt4\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.874905 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-combined-ca-bundle\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.875032 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/073f4b22-319f-4cbb-ac96-c0a18da477a6-scripts\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.875441 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/073f4b22-319f-4cbb-ac96-c0a18da477a6-etc-swift\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.875773 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/073f4b22-319f-4cbb-ac96-c0a18da477a6-ring-data-devices\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.875855 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/073f4b22-319f-4cbb-ac96-c0a18da477a6-scripts\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.881126 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-dispersionconf\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.881571 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-combined-ca-bundle\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.883435 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-swiftconf\") pod \"swift-ring-rebalance-drdk8\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:02 crc kubenswrapper[4895]: I0129 09:00:02.894352 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjdt4\" (UniqueName: \"kubernetes.io/projected/073f4b22-319f-4cbb-ac96-c0a18da477a6-kube-api-access-rjdt4\") pod \"swift-ring-rebalance-drdk8\" (UID: 
\"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:03 crc kubenswrapper[4895]: I0129 09:00:03.174563 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:03 crc kubenswrapper[4895]: I0129 09:00:03.181290 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:03 crc kubenswrapper[4895]: E0129 09:00:03.181502 4895 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 09:00:03 crc kubenswrapper[4895]: E0129 09:00:03.181531 4895 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 09:00:03 crc kubenswrapper[4895]: E0129 09:00:03.181589 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift podName:db0d35a0-7174-452f-bd71-2dae8f7dff11 nodeName:}" failed. No retries permitted until 2026-01-29 09:00:04.181571037 +0000 UTC m=+1145.823079183 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift") pod "swift-storage-0" (UID: "db0d35a0-7174-452f-bd71-2dae8f7dff11") : configmap "swift-ring-files" not found Jan 29 09:00:03 crc kubenswrapper[4895]: I0129 09:00:03.579527 4895 generic.go:334] "Generic (PLEG): container finished" podID="508f7a95-0307-4a55-8e27-29e44a6823ef" containerID="2c4d79873a0bf753d61b4cb09a0f82ba5b2b3d31f8e533474572322c6bdc1e25" exitCode=0 Jan 29 09:00:03 crc kubenswrapper[4895]: I0129 09:00:03.579656 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-eb0f-account-create-update-8hlmq" event={"ID":"508f7a95-0307-4a55-8e27-29e44a6823ef","Type":"ContainerDied","Data":"2c4d79873a0bf753d61b4cb09a0f82ba5b2b3d31f8e533474572322c6bdc1e25"} Jan 29 09:00:03 crc kubenswrapper[4895]: I0129 09:00:03.582442 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b5f-account-create-update-8gmtx" event={"ID":"42d29420-dc1a-4983-b157-59364db98935","Type":"ContainerDied","Data":"5caf7ff5a638d9ebbcc04049d656a4ef8bd381aa7d33d7d5b4a4fcaa8a524084"} Jan 29 09:00:03 crc kubenswrapper[4895]: I0129 09:00:03.582346 4895 generic.go:334] "Generic (PLEG): container finished" podID="42d29420-dc1a-4983-b157-59364db98935" containerID="5caf7ff5a638d9ebbcc04049d656a4ef8bd381aa7d33d7d5b4a4fcaa8a524084" exitCode=0 Jan 29 09:00:03 crc kubenswrapper[4895]: I0129 09:00:03.586858 4895 generic.go:334] "Generic (PLEG): container finished" podID="9d38c12b-6fa7-411f-91bf-a0d0a6ed733c" containerID="bdfc2af1ace031accce5628b7e570e5199e5cfcbb8e3b18cb3ff0908c801fd84" exitCode=0 Jan 29 09:00:03 crc kubenswrapper[4895]: I0129 09:00:03.586948 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-fzmwt" event={"ID":"9d38c12b-6fa7-411f-91bf-a0d0a6ed733c","Type":"ContainerDied","Data":"bdfc2af1ace031accce5628b7e570e5199e5cfcbb8e3b18cb3ff0908c801fd84"} Jan 29 09:00:03 crc 
kubenswrapper[4895]: I0129 09:00:03.589023 4895 generic.go:334] "Generic (PLEG): container finished" podID="7da1d197-6753-4aad-b1a4-55cfb0a2f742" containerID="362f944e570cfa6af434a286fa1d7d1d3e8d5d3acfed304a08ba4ff3c537dda1" exitCode=0 Jan 29 09:00:03 crc kubenswrapper[4895]: I0129 09:00:03.589151 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" event={"ID":"7da1d197-6753-4aad-b1a4-55cfb0a2f742","Type":"ContainerDied","Data":"362f944e570cfa6af434a286fa1d7d1d3e8d5d3acfed304a08ba4ff3c537dda1"} Jan 29 09:00:03 crc kubenswrapper[4895]: I0129 09:00:03.590874 4895 generic.go:334] "Generic (PLEG): container finished" podID="ad71f34b-a6b5-4947-9dab-83d1655a8d7a" containerID="077d409bc60227d4a6ca64441f83a2f2df9558d9b64fc524a6b74ab4808e4dc7" exitCode=0 Jan 29 09:00:03 crc kubenswrapper[4895]: I0129 09:00:03.590951 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4r786" event={"ID":"ad71f34b-a6b5-4947-9dab-83d1655a8d7a","Type":"ContainerDied","Data":"077d409bc60227d4a6ca64441f83a2f2df9558d9b64fc524a6b74ab4808e4dc7"} Jan 29 09:00:03 crc kubenswrapper[4895]: I0129 09:00:03.592615 4895 generic.go:334] "Generic (PLEG): container finished" podID="1c111f64-ae01-431e-a625-7b2131a90998" containerID="0b2d0147c3e633071187bf910a0446568146f2be77738176d1da4326554272fd" exitCode=0 Jan 29 09:00:03 crc kubenswrapper[4895]: I0129 09:00:03.592659 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f408-account-create-update-9lsgg" event={"ID":"1c111f64-ae01-431e-a625-7b2131a90998","Type":"ContainerDied","Data":"0b2d0147c3e633071187bf910a0446568146f2be77738176d1da4326554272fd"} Jan 29 09:00:03 crc kubenswrapper[4895]: I0129 09:00:03.594006 4895 generic.go:334] "Generic (PLEG): container finished" podID="a9a8b6ea-9175-4086-98ad-a7b35af3798d" containerID="57544b7ab707073235f4ee815977b2bce505383b1e3f0888b3ef28b8b7ed9444" exitCode=0 Jan 29 09:00:03 crc 
kubenswrapper[4895]: I0129 09:00:03.594035 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" event={"ID":"a9a8b6ea-9175-4086-98ad-a7b35af3798d","Type":"ContainerDied","Data":"57544b7ab707073235f4ee815977b2bce505383b1e3f0888b3ef28b8b7ed9444"} Jan 29 09:00:03 crc kubenswrapper[4895]: I0129 09:00:03.594049 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" event={"ID":"a9a8b6ea-9175-4086-98ad-a7b35af3798d","Type":"ContainerStarted","Data":"36e92f0bc576980a30bd2bee23cc12c3294012b3df43607591269751838c27fd"} Jan 29 09:00:03 crc kubenswrapper[4895]: W0129 09:00:03.708445 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod073f4b22_319f_4cbb_ac96_c0a18da477a6.slice/crio-7e2d780e1348ea4b1b35358e1a1d3e666cb5ac5988ef46e0a624a93ed019c7cd WatchSource:0}: Error finding container 7e2d780e1348ea4b1b35358e1a1d3e666cb5ac5988ef46e0a624a93ed019c7cd: Status 404 returned error can't find the container with id 7e2d780e1348ea4b1b35358e1a1d3e666cb5ac5988ef46e0a624a93ed019c7cd Jan 29 09:00:03 crc kubenswrapper[4895]: I0129 09:00:03.762521 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-drdk8"] Jan 29 09:00:04 crc kubenswrapper[4895]: I0129 09:00:04.356412 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:04 crc kubenswrapper[4895]: E0129 09:00:04.357194 4895 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 09:00:04 crc kubenswrapper[4895]: E0129 09:00:04.357311 4895 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 09:00:04 crc kubenswrapper[4895]: E0129 09:00:04.357453 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift podName:db0d35a0-7174-452f-bd71-2dae8f7dff11 nodeName:}" failed. No retries permitted until 2026-01-29 09:00:06.357429363 +0000 UTC m=+1147.998937509 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift") pod "swift-storage-0" (UID: "db0d35a0-7174-452f-bd71-2dae8f7dff11") : configmap "swift-ring-files" not found Jan 29 09:00:04 crc kubenswrapper[4895]: I0129 09:00:04.607250 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" event={"ID":"a9a8b6ea-9175-4086-98ad-a7b35af3798d","Type":"ContainerStarted","Data":"dc85c6f26ebe5257c5eed4f46629e4041f4e9d089296e541d79b90dd36eea35a"} Jan 29 09:00:04 crc kubenswrapper[4895]: I0129 09:00:04.607877 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:04 crc kubenswrapper[4895]: I0129 09:00:04.609219 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-drdk8" event={"ID":"073f4b22-319f-4cbb-ac96-c0a18da477a6","Type":"ContainerStarted","Data":"7e2d780e1348ea4b1b35358e1a1d3e666cb5ac5988ef46e0a624a93ed019c7cd"} Jan 29 09:00:04 crc kubenswrapper[4895]: I0129 09:00:04.656844 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" podStartSLOduration=3.656809424 podStartE2EDuration="3.656809424s" podCreationTimestamp="2026-01-29 09:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:04.638456504 +0000 UTC m=+1146.279964650" 
watchObservedRunningTime="2026-01-29 09:00:04.656809424 +0000 UTC m=+1146.298317570" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.231192 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.380871 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7da1d197-6753-4aad-b1a4-55cfb0a2f742-secret-volume\") pod \"7da1d197-6753-4aad-b1a4-55cfb0a2f742\" (UID: \"7da1d197-6753-4aad-b1a4-55cfb0a2f742\") " Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.381143 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7da1d197-6753-4aad-b1a4-55cfb0a2f742-config-volume\") pod \"7da1d197-6753-4aad-b1a4-55cfb0a2f742\" (UID: \"7da1d197-6753-4aad-b1a4-55cfb0a2f742\") " Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.381350 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mswmq\" (UniqueName: \"kubernetes.io/projected/7da1d197-6753-4aad-b1a4-55cfb0a2f742-kube-api-access-mswmq\") pod \"7da1d197-6753-4aad-b1a4-55cfb0a2f742\" (UID: \"7da1d197-6753-4aad-b1a4-55cfb0a2f742\") " Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.386397 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da1d197-6753-4aad-b1a4-55cfb0a2f742-config-volume" (OuterVolumeSpecName: "config-volume") pod "7da1d197-6753-4aad-b1a4-55cfb0a2f742" (UID: "7da1d197-6753-4aad-b1a4-55cfb0a2f742"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.418615 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7da1d197-6753-4aad-b1a4-55cfb0a2f742-kube-api-access-mswmq" (OuterVolumeSpecName: "kube-api-access-mswmq") pod "7da1d197-6753-4aad-b1a4-55cfb0a2f742" (UID: "7da1d197-6753-4aad-b1a4-55cfb0a2f742"). InnerVolumeSpecName "kube-api-access-mswmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.419777 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7da1d197-6753-4aad-b1a4-55cfb0a2f742-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7da1d197-6753-4aad-b1a4-55cfb0a2f742" (UID: "7da1d197-6753-4aad-b1a4-55cfb0a2f742"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.487832 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7da1d197-6753-4aad-b1a4-55cfb0a2f742-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.487883 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7da1d197-6753-4aad-b1a4-55cfb0a2f742-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.487899 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mswmq\" (UniqueName: \"kubernetes.io/projected/7da1d197-6753-4aad-b1a4-55cfb0a2f742-kube-api-access-mswmq\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.520502 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f408-account-create-update-9lsgg" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.528682 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4r786" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.541488 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b5f-account-create-update-8gmtx" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.548710 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-eb0f-account-create-update-8hlmq" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.568494 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-fzmwt" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.651283 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b5f-account-create-update-8gmtx" event={"ID":"42d29420-dc1a-4983-b157-59364db98935","Type":"ContainerDied","Data":"55ffddfede17ef094fbad059189ce3163196cd58a47ced9de00f7f7196f66ca8"} Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.651701 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55ffddfede17ef094fbad059189ce3163196cd58a47ced9de00f7f7196f66ca8" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.651786 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5b5f-account-create-update-8gmtx" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.656296 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-fzmwt" event={"ID":"9d38c12b-6fa7-411f-91bf-a0d0a6ed733c","Type":"ContainerDied","Data":"0bde742f0073e2ade4117f3c788c56d2d1bbb2a405d06f3c2c5fc0602d6ae784"} Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.656342 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bde742f0073e2ade4117f3c788c56d2d1bbb2a405d06f3c2c5fc0602d6ae784" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.656412 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-fzmwt" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.660695 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4r786" event={"ID":"ad71f34b-a6b5-4947-9dab-83d1655a8d7a","Type":"ContainerDied","Data":"6c40612421612c14d558e35072270f629a4a9004a04f080f6a96b68d57c6d715"} Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.660759 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c40612421612c14d558e35072270f629a4a9004a04f080f6a96b68d57c6d715" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.660788 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4r786" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.665885 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f408-account-create-update-9lsgg" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.665962 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f408-account-create-update-9lsgg" event={"ID":"1c111f64-ae01-431e-a625-7b2131a90998","Type":"ContainerDied","Data":"3e9b1ad38a047ab1774a04c602c4e1951d5c7a7294439849dff7e1c00538a5cf"} Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.666006 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e9b1ad38a047ab1774a04c602c4e1951d5c7a7294439849dff7e1c00538a5cf" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.669453 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-eb0f-account-create-update-8hlmq" event={"ID":"508f7a95-0307-4a55-8e27-29e44a6823ef","Type":"ContainerDied","Data":"a3ea295a369605593b518b4fd858f1e97bcd3f61f70d4f7b103ac806159525b3"} Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.669498 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3ea295a369605593b518b4fd858f1e97bcd3f61f70d4f7b103ac806159525b3" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.669465 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-eb0f-account-create-update-8hlmq" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.674183 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" event={"ID":"7da1d197-6753-4aad-b1a4-55cfb0a2f742","Type":"ContainerDied","Data":"b73abe3f73c11a4345c2f3ff0cfce8eec2a882ab6e3ab679ccde4aff1ecd8ce9"} Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.674260 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-jz4mz" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.674260 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b73abe3f73c11a4345c2f3ff0cfce8eec2a882ab6e3ab679ccde4aff1ecd8ce9" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.690456 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzz84\" (UniqueName: \"kubernetes.io/projected/9d38c12b-6fa7-411f-91bf-a0d0a6ed733c-kube-api-access-jzz84\") pod \"9d38c12b-6fa7-411f-91bf-a0d0a6ed733c\" (UID: \"9d38c12b-6fa7-411f-91bf-a0d0a6ed733c\") " Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.690574 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad71f34b-a6b5-4947-9dab-83d1655a8d7a-operator-scripts\") pod \"ad71f34b-a6b5-4947-9dab-83d1655a8d7a\" (UID: \"ad71f34b-a6b5-4947-9dab-83d1655a8d7a\") " Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.690643 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk8kb\" (UniqueName: \"kubernetes.io/projected/ad71f34b-a6b5-4947-9dab-83d1655a8d7a-kube-api-access-lk8kb\") pod \"ad71f34b-a6b5-4947-9dab-83d1655a8d7a\" (UID: \"ad71f34b-a6b5-4947-9dab-83d1655a8d7a\") " Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.690713 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d38c12b-6fa7-411f-91bf-a0d0a6ed733c-operator-scripts\") pod \"9d38c12b-6fa7-411f-91bf-a0d0a6ed733c\" (UID: \"9d38c12b-6fa7-411f-91bf-a0d0a6ed733c\") " Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.690768 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1c111f64-ae01-431e-a625-7b2131a90998-operator-scripts\") pod \"1c111f64-ae01-431e-a625-7b2131a90998\" (UID: \"1c111f64-ae01-431e-a625-7b2131a90998\") " Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.690959 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/508f7a95-0307-4a55-8e27-29e44a6823ef-operator-scripts\") pod \"508f7a95-0307-4a55-8e27-29e44a6823ef\" (UID: \"508f7a95-0307-4a55-8e27-29e44a6823ef\") " Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.690997 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc4bk\" (UniqueName: \"kubernetes.io/projected/42d29420-dc1a-4983-b157-59364db98935-kube-api-access-rc4bk\") pod \"42d29420-dc1a-4983-b157-59364db98935\" (UID: \"42d29420-dc1a-4983-b157-59364db98935\") " Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.691057 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2h6l\" (UniqueName: \"kubernetes.io/projected/1c111f64-ae01-431e-a625-7b2131a90998-kube-api-access-g2h6l\") pod \"1c111f64-ae01-431e-a625-7b2131a90998\" (UID: \"1c111f64-ae01-431e-a625-7b2131a90998\") " Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.691079 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42d29420-dc1a-4983-b157-59364db98935-operator-scripts\") pod \"42d29420-dc1a-4983-b157-59364db98935\" (UID: \"42d29420-dc1a-4983-b157-59364db98935\") " Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.691224 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n6br\" (UniqueName: \"kubernetes.io/projected/508f7a95-0307-4a55-8e27-29e44a6823ef-kube-api-access-2n6br\") pod \"508f7a95-0307-4a55-8e27-29e44a6823ef\" (UID: \"508f7a95-0307-4a55-8e27-29e44a6823ef\") " 
Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.691419 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad71f34b-a6b5-4947-9dab-83d1655a8d7a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad71f34b-a6b5-4947-9dab-83d1655a8d7a" (UID: "ad71f34b-a6b5-4947-9dab-83d1655a8d7a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.691674 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad71f34b-a6b5-4947-9dab-83d1655a8d7a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.691798 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/508f7a95-0307-4a55-8e27-29e44a6823ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "508f7a95-0307-4a55-8e27-29e44a6823ef" (UID: "508f7a95-0307-4a55-8e27-29e44a6823ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.692163 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d38c12b-6fa7-411f-91bf-a0d0a6ed733c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d38c12b-6fa7-411f-91bf-a0d0a6ed733c" (UID: "9d38c12b-6fa7-411f-91bf-a0d0a6ed733c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.692263 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c111f64-ae01-431e-a625-7b2131a90998-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c111f64-ae01-431e-a625-7b2131a90998" (UID: "1c111f64-ae01-431e-a625-7b2131a90998"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.692878 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42d29420-dc1a-4983-b157-59364db98935-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42d29420-dc1a-4983-b157-59364db98935" (UID: "42d29420-dc1a-4983-b157-59364db98935"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.705573 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d29420-dc1a-4983-b157-59364db98935-kube-api-access-rc4bk" (OuterVolumeSpecName: "kube-api-access-rc4bk") pod "42d29420-dc1a-4983-b157-59364db98935" (UID: "42d29420-dc1a-4983-b157-59364db98935"). InnerVolumeSpecName "kube-api-access-rc4bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.705758 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad71f34b-a6b5-4947-9dab-83d1655a8d7a-kube-api-access-lk8kb" (OuterVolumeSpecName: "kube-api-access-lk8kb") pod "ad71f34b-a6b5-4947-9dab-83d1655a8d7a" (UID: "ad71f34b-a6b5-4947-9dab-83d1655a8d7a"). InnerVolumeSpecName "kube-api-access-lk8kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.705941 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d38c12b-6fa7-411f-91bf-a0d0a6ed733c-kube-api-access-jzz84" (OuterVolumeSpecName: "kube-api-access-jzz84") pod "9d38c12b-6fa7-411f-91bf-a0d0a6ed733c" (UID: "9d38c12b-6fa7-411f-91bf-a0d0a6ed733c"). InnerVolumeSpecName "kube-api-access-jzz84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.711100 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c111f64-ae01-431e-a625-7b2131a90998-kube-api-access-g2h6l" (OuterVolumeSpecName: "kube-api-access-g2h6l") pod "1c111f64-ae01-431e-a625-7b2131a90998" (UID: "1c111f64-ae01-431e-a625-7b2131a90998"). InnerVolumeSpecName "kube-api-access-g2h6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.709170 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/508f7a95-0307-4a55-8e27-29e44a6823ef-kube-api-access-2n6br" (OuterVolumeSpecName: "kube-api-access-2n6br") pod "508f7a95-0307-4a55-8e27-29e44a6823ef" (UID: "508f7a95-0307-4a55-8e27-29e44a6823ef"). InnerVolumeSpecName "kube-api-access-2n6br". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.974893 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk8kb\" (UniqueName: \"kubernetes.io/projected/ad71f34b-a6b5-4947-9dab-83d1655a8d7a-kube-api-access-lk8kb\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.974954 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d38c12b-6fa7-411f-91bf-a0d0a6ed733c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.974966 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c111f64-ae01-431e-a625-7b2131a90998-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.974979 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/508f7a95-0307-4a55-8e27-29e44a6823ef-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.974992 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc4bk\" (UniqueName: \"kubernetes.io/projected/42d29420-dc1a-4983-b157-59364db98935-kube-api-access-rc4bk\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.975003 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2h6l\" (UniqueName: \"kubernetes.io/projected/1c111f64-ae01-431e-a625-7b2131a90998-kube-api-access-g2h6l\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.975015 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42d29420-dc1a-4983-b157-59364db98935-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.975028 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n6br\" (UniqueName: \"kubernetes.io/projected/508f7a95-0307-4a55-8e27-29e44a6823ef-kube-api-access-2n6br\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:05 crc kubenswrapper[4895]: I0129 09:00:05.975039 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzz84\" (UniqueName: \"kubernetes.io/projected/9d38c12b-6fa7-411f-91bf-a0d0a6ed733c-kube-api-access-jzz84\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.069643 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fghpq"] Jan 29 09:00:06 crc kubenswrapper[4895]: E0129 09:00:06.070370 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="508f7a95-0307-4a55-8e27-29e44a6823ef" containerName="mariadb-account-create-update" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.070426 4895 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="508f7a95-0307-4a55-8e27-29e44a6823ef" containerName="mariadb-account-create-update" Jan 29 09:00:06 crc kubenswrapper[4895]: E0129 09:00:06.070451 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da1d197-6753-4aad-b1a4-55cfb0a2f742" containerName="collect-profiles" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.070460 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da1d197-6753-4aad-b1a4-55cfb0a2f742" containerName="collect-profiles" Jan 29 09:00:06 crc kubenswrapper[4895]: E0129 09:00:06.070477 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d29420-dc1a-4983-b157-59364db98935" containerName="mariadb-account-create-update" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.070509 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d29420-dc1a-4983-b157-59364db98935" containerName="mariadb-account-create-update" Jan 29 09:00:06 crc kubenswrapper[4895]: E0129 09:00:06.070524 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d38c12b-6fa7-411f-91bf-a0d0a6ed733c" containerName="mariadb-database-create" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.070529 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d38c12b-6fa7-411f-91bf-a0d0a6ed733c" containerName="mariadb-database-create" Jan 29 09:00:06 crc kubenswrapper[4895]: E0129 09:00:06.070538 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad71f34b-a6b5-4947-9dab-83d1655a8d7a" containerName="mariadb-database-create" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.070544 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad71f34b-a6b5-4947-9dab-83d1655a8d7a" containerName="mariadb-database-create" Jan 29 09:00:06 crc kubenswrapper[4895]: E0129 09:00:06.070555 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c111f64-ae01-431e-a625-7b2131a90998" containerName="mariadb-account-create-update" Jan 29 09:00:06 crc 
kubenswrapper[4895]: I0129 09:00:06.070563 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c111f64-ae01-431e-a625-7b2131a90998" containerName="mariadb-account-create-update" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.070744 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d38c12b-6fa7-411f-91bf-a0d0a6ed733c" containerName="mariadb-database-create" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.070763 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="508f7a95-0307-4a55-8e27-29e44a6823ef" containerName="mariadb-account-create-update" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.070782 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d29420-dc1a-4983-b157-59364db98935" containerName="mariadb-account-create-update" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.070794 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c111f64-ae01-431e-a625-7b2131a90998" containerName="mariadb-account-create-update" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.070808 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7da1d197-6753-4aad-b1a4-55cfb0a2f742" containerName="collect-profiles" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.070827 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad71f34b-a6b5-4947-9dab-83d1655a8d7a" containerName="mariadb-database-create" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.071673 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fghpq" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.075181 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.097912 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fghpq"] Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.196336 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5tlv\" (UniqueName: \"kubernetes.io/projected/b8c9c220-de9a-4346-a0f4-a3b007057c3c-kube-api-access-s5tlv\") pod \"root-account-create-update-fghpq\" (UID: \"b8c9c220-de9a-4346-a0f4-a3b007057c3c\") " pod="openstack/root-account-create-update-fghpq" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.196554 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c9c220-de9a-4346-a0f4-a3b007057c3c-operator-scripts\") pod \"root-account-create-update-fghpq\" (UID: \"b8c9c220-de9a-4346-a0f4-a3b007057c3c\") " pod="openstack/root-account-create-update-fghpq" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.305168 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5tlv\" (UniqueName: \"kubernetes.io/projected/b8c9c220-de9a-4346-a0f4-a3b007057c3c-kube-api-access-s5tlv\") pod \"root-account-create-update-fghpq\" (UID: \"b8c9c220-de9a-4346-a0f4-a3b007057c3c\") " pod="openstack/root-account-create-update-fghpq" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.305268 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c9c220-de9a-4346-a0f4-a3b007057c3c-operator-scripts\") pod \"root-account-create-update-fghpq\" (UID: 
\"b8c9c220-de9a-4346-a0f4-a3b007057c3c\") " pod="openstack/root-account-create-update-fghpq" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.307324 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c9c220-de9a-4346-a0f4-a3b007057c3c-operator-scripts\") pod \"root-account-create-update-fghpq\" (UID: \"b8c9c220-de9a-4346-a0f4-a3b007057c3c\") " pod="openstack/root-account-create-update-fghpq" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.344174 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5tlv\" (UniqueName: \"kubernetes.io/projected/b8c9c220-de9a-4346-a0f4-a3b007057c3c-kube-api-access-s5tlv\") pod \"root-account-create-update-fghpq\" (UID: \"b8c9c220-de9a-4346-a0f4-a3b007057c3c\") " pod="openstack/root-account-create-update-fghpq" Jan 29 09:00:06 crc kubenswrapper[4895]: E0129 09:00:06.364689 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c111f64_ae01_431e_a625_7b2131a90998.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42d29420_dc1a_4983_b157_59364db98935.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad71f34b_a6b5_4947_9dab_83d1655a8d7a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod508f7a95_0307_4a55_8e27_29e44a6823ef.slice/crio-a3ea295a369605593b518b4fd858f1e97bcd3f61f70d4f7b103ac806159525b3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod508f7a95_0307_4a55_8e27_29e44a6823ef.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d38c12b_6fa7_411f_91bf_a0d0a6ed733c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c111f64_ae01_431e_a625_7b2131a90998.slice/crio-3e9b1ad38a047ab1774a04c602c4e1951d5c7a7294439849dff7e1c00538a5cf\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad71f34b_a6b5_4947_9dab_83d1655a8d7a.slice/crio-6c40612421612c14d558e35072270f629a4a9004a04f080f6a96b68d57c6d715\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42d29420_dc1a_4983_b157_59364db98935.slice/crio-55ffddfede17ef094fbad059189ce3163196cd58a47ced9de00f7f7196f66ca8\": RecentStats: unable to find data in memory cache]" Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.407161 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:06 crc kubenswrapper[4895]: E0129 09:00:06.409451 4895 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 09:00:06 crc kubenswrapper[4895]: E0129 09:00:06.409479 4895 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 09:00:06 crc kubenswrapper[4895]: E0129 09:00:06.409893 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift podName:db0d35a0-7174-452f-bd71-2dae8f7dff11 nodeName:}" failed. No retries permitted until 2026-01-29 09:00:10.4096015 +0000 UTC m=+1152.051109806 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift") pod "swift-storage-0" (UID: "db0d35a0-7174-452f-bd71-2dae8f7dff11") : configmap "swift-ring-files" not found Jan 29 09:00:06 crc kubenswrapper[4895]: I0129 09:00:06.505899 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fghpq" Jan 29 09:00:07 crc kubenswrapper[4895]: I0129 09:00:07.005725 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fghpq"] Jan 29 09:00:07 crc kubenswrapper[4895]: I0129 09:00:07.704642 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fghpq" event={"ID":"b8c9c220-de9a-4346-a0f4-a3b007057c3c","Type":"ContainerStarted","Data":"4f8a12db7447d5337b4d7655cd7a1fd7f96361e8a67c40734c577d37d31236cb"} Jan 29 09:00:07 crc kubenswrapper[4895]: I0129 09:00:07.704760 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fghpq" event={"ID":"b8c9c220-de9a-4346-a0f4-a3b007057c3c","Type":"ContainerStarted","Data":"cc77689226b89d2f64e4edc963efee063a975c4e2cb2f08e410aff9e4a8ecba0"} Jan 29 09:00:07 crc kubenswrapper[4895]: I0129 09:00:07.730783 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-fghpq" podStartSLOduration=1.730762369 podStartE2EDuration="1.730762369s" podCreationTimestamp="2026-01-29 09:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:07.721866092 +0000 UTC m=+1149.363374238" watchObservedRunningTime="2026-01-29 09:00:07.730762369 +0000 UTC m=+1149.372270515" Jan 29 09:00:08 crc kubenswrapper[4895]: I0129 09:00:08.717552 4895 generic.go:334] "Generic (PLEG): container finished" podID="b8c9c220-de9a-4346-a0f4-a3b007057c3c" 
containerID="4f8a12db7447d5337b4d7655cd7a1fd7f96361e8a67c40734c577d37d31236cb" exitCode=0 Jan 29 09:00:08 crc kubenswrapper[4895]: I0129 09:00:08.717613 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fghpq" event={"ID":"b8c9c220-de9a-4346-a0f4-a3b007057c3c","Type":"ContainerDied","Data":"4f8a12db7447d5337b4d7655cd7a1fd7f96361e8a67c40734c577d37d31236cb"} Jan 29 09:00:08 crc kubenswrapper[4895]: I0129 09:00:08.904110 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.287308 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kchhm"] Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.289006 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kchhm" Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.294585 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-brtlg" Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.294842 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.299900 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kchhm"] Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.458648 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-combined-ca-bundle\") pod \"glance-db-sync-kchhm\" (UID: \"a7e86824-b384-45ea-b4bb-946f795bc9c5\") " pod="openstack/glance-db-sync-kchhm" Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.459181 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvprn\" (UniqueName: 
\"kubernetes.io/projected/a7e86824-b384-45ea-b4bb-946f795bc9c5-kube-api-access-qvprn\") pod \"glance-db-sync-kchhm\" (UID: \"a7e86824-b384-45ea-b4bb-946f795bc9c5\") " pod="openstack/glance-db-sync-kchhm" Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.459252 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-db-sync-config-data\") pod \"glance-db-sync-kchhm\" (UID: \"a7e86824-b384-45ea-b4bb-946f795bc9c5\") " pod="openstack/glance-db-sync-kchhm" Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.459285 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-config-data\") pod \"glance-db-sync-kchhm\" (UID: \"a7e86824-b384-45ea-b4bb-946f795bc9c5\") " pod="openstack/glance-db-sync-kchhm" Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.561179 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-combined-ca-bundle\") pod \"glance-db-sync-kchhm\" (UID: \"a7e86824-b384-45ea-b4bb-946f795bc9c5\") " pod="openstack/glance-db-sync-kchhm" Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.561252 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvprn\" (UniqueName: \"kubernetes.io/projected/a7e86824-b384-45ea-b4bb-946f795bc9c5-kube-api-access-qvprn\") pod \"glance-db-sync-kchhm\" (UID: \"a7e86824-b384-45ea-b4bb-946f795bc9c5\") " pod="openstack/glance-db-sync-kchhm" Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.561279 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-db-sync-config-data\") pod \"glance-db-sync-kchhm\" (UID: \"a7e86824-b384-45ea-b4bb-946f795bc9c5\") " pod="openstack/glance-db-sync-kchhm" Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.561304 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-config-data\") pod \"glance-db-sync-kchhm\" (UID: \"a7e86824-b384-45ea-b4bb-946f795bc9c5\") " pod="openstack/glance-db-sync-kchhm" Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.573902 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-config-data\") pod \"glance-db-sync-kchhm\" (UID: \"a7e86824-b384-45ea-b4bb-946f795bc9c5\") " pod="openstack/glance-db-sync-kchhm" Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.582817 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-combined-ca-bundle\") pod \"glance-db-sync-kchhm\" (UID: \"a7e86824-b384-45ea-b4bb-946f795bc9c5\") " pod="openstack/glance-db-sync-kchhm" Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.585655 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-db-sync-config-data\") pod \"glance-db-sync-kchhm\" (UID: \"a7e86824-b384-45ea-b4bb-946f795bc9c5\") " pod="openstack/glance-db-sync-kchhm" Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.589375 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvprn\" (UniqueName: \"kubernetes.io/projected/a7e86824-b384-45ea-b4bb-946f795bc9c5-kube-api-access-qvprn\") pod \"glance-db-sync-kchhm\" (UID: 
\"a7e86824-b384-45ea-b4bb-946f795bc9c5\") " pod="openstack/glance-db-sync-kchhm" Jan 29 09:00:09 crc kubenswrapper[4895]: I0129 09:00:09.646363 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kchhm" Jan 29 09:00:10 crc kubenswrapper[4895]: I0129 09:00:10.422048 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:10 crc kubenswrapper[4895]: E0129 09:00:10.422232 4895 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 09:00:10 crc kubenswrapper[4895]: E0129 09:00:10.422675 4895 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 09:00:10 crc kubenswrapper[4895]: E0129 09:00:10.422734 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift podName:db0d35a0-7174-452f-bd71-2dae8f7dff11 nodeName:}" failed. No retries permitted until 2026-01-29 09:00:18.422713645 +0000 UTC m=+1160.064221791 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift") pod "swift-storage-0" (UID: "db0d35a0-7174-452f-bd71-2dae8f7dff11") : configmap "swift-ring-files" not found Jan 29 09:00:10 crc kubenswrapper[4895]: I0129 09:00:10.709521 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fghpq" Jan 29 09:00:10 crc kubenswrapper[4895]: I0129 09:00:10.727161 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c9c220-de9a-4346-a0f4-a3b007057c3c-operator-scripts\") pod \"b8c9c220-de9a-4346-a0f4-a3b007057c3c\" (UID: \"b8c9c220-de9a-4346-a0f4-a3b007057c3c\") " Jan 29 09:00:10 crc kubenswrapper[4895]: I0129 09:00:10.727263 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5tlv\" (UniqueName: \"kubernetes.io/projected/b8c9c220-de9a-4346-a0f4-a3b007057c3c-kube-api-access-s5tlv\") pod \"b8c9c220-de9a-4346-a0f4-a3b007057c3c\" (UID: \"b8c9c220-de9a-4346-a0f4-a3b007057c3c\") " Jan 29 09:00:10 crc kubenswrapper[4895]: I0129 09:00:10.728752 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8c9c220-de9a-4346-a0f4-a3b007057c3c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8c9c220-de9a-4346-a0f4-a3b007057c3c" (UID: "b8c9c220-de9a-4346-a0f4-a3b007057c3c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:10 crc kubenswrapper[4895]: I0129 09:00:10.736547 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8c9c220-de9a-4346-a0f4-a3b007057c3c-kube-api-access-s5tlv" (OuterVolumeSpecName: "kube-api-access-s5tlv") pod "b8c9c220-de9a-4346-a0f4-a3b007057c3c" (UID: "b8c9c220-de9a-4346-a0f4-a3b007057c3c"). InnerVolumeSpecName "kube-api-access-s5tlv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:10 crc kubenswrapper[4895]: I0129 09:00:10.761535 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fghpq" event={"ID":"b8c9c220-de9a-4346-a0f4-a3b007057c3c","Type":"ContainerDied","Data":"cc77689226b89d2f64e4edc963efee063a975c4e2cb2f08e410aff9e4a8ecba0"} Jan 29 09:00:10 crc kubenswrapper[4895]: I0129 09:00:10.761601 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc77689226b89d2f64e4edc963efee063a975c4e2cb2f08e410aff9e4a8ecba0" Jan 29 09:00:10 crc kubenswrapper[4895]: I0129 09:00:10.761684 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fghpq" Jan 29 09:00:10 crc kubenswrapper[4895]: I0129 09:00:10.830260 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c9c220-de9a-4346-a0f4-a3b007057c3c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:10 crc kubenswrapper[4895]: I0129 09:00:10.830294 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5tlv\" (UniqueName: \"kubernetes.io/projected/b8c9c220-de9a-4346-a0f4-a3b007057c3c-kube-api-access-s5tlv\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:11 crc kubenswrapper[4895]: I0129 09:00:11.203632 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kchhm"] Jan 29 09:00:11 crc kubenswrapper[4895]: W0129 09:00:11.216090 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7e86824_b384_45ea_b4bb_946f795bc9c5.slice/crio-a8db93a23832116e787dc2f8f319816de9698fb6772d312fe9627c16a4e79396 WatchSource:0}: Error finding container a8db93a23832116e787dc2f8f319816de9698fb6772d312fe9627c16a4e79396: Status 404 returned error can't find the container with id 
a8db93a23832116e787dc2f8f319816de9698fb6772d312fe9627c16a4e79396 Jan 29 09:00:11 crc kubenswrapper[4895]: I0129 09:00:11.772941 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-drdk8" event={"ID":"073f4b22-319f-4cbb-ac96-c0a18da477a6","Type":"ContainerStarted","Data":"1debacd0ed7c3ac16c3cf040f0bbfeb8572200da97d8b4c9b796e54945ccbc29"} Jan 29 09:00:11 crc kubenswrapper[4895]: I0129 09:00:11.774551 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kchhm" event={"ID":"a7e86824-b384-45ea-b4bb-946f795bc9c5","Type":"ContainerStarted","Data":"a8db93a23832116e787dc2f8f319816de9698fb6772d312fe9627c16a4e79396"} Jan 29 09:00:11 crc kubenswrapper[4895]: I0129 09:00:11.803840 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-drdk8" podStartSLOduration=2.773974295 podStartE2EDuration="9.803818647s" podCreationTimestamp="2026-01-29 09:00:02 +0000 UTC" firstStartedPulling="2026-01-29 09:00:03.723158861 +0000 UTC m=+1145.364667007" lastFinishedPulling="2026-01-29 09:00:10.753003213 +0000 UTC m=+1152.394511359" observedRunningTime="2026-01-29 09:00:11.79794738 +0000 UTC m=+1153.439455526" watchObservedRunningTime="2026-01-29 09:00:11.803818647 +0000 UTC m=+1153.445326793" Jan 29 09:00:11 crc kubenswrapper[4895]: I0129 09:00:11.909162 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fghpq"] Jan 29 09:00:11 crc kubenswrapper[4895]: I0129 09:00:11.921760 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fghpq"] Jan 29 09:00:11 crc kubenswrapper[4895]: I0129 09:00:11.982193 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" Jan 29 09:00:12 crc kubenswrapper[4895]: I0129 09:00:12.059573 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-k965n"] Jan 29 
09:00:12 crc kubenswrapper[4895]: I0129 09:00:12.060027 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-k965n" podUID="f8d33507-d01b-456f-b8af-8e61c9461ac0" containerName="dnsmasq-dns" containerID="cri-o://6250b68ae8289523ca90a0f14fe623da07746d0612c9ce9d3dac65169d7e6d96" gracePeriod=10 Jan 29 09:00:12 crc kubenswrapper[4895]: I0129 09:00:12.347332 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-k965n" podUID="f8d33507-d01b-456f-b8af-8e61c9461ac0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 29 09:00:12 crc kubenswrapper[4895]: I0129 09:00:12.830273 4895 generic.go:334] "Generic (PLEG): container finished" podID="f8d33507-d01b-456f-b8af-8e61c9461ac0" containerID="6250b68ae8289523ca90a0f14fe623da07746d0612c9ce9d3dac65169d7e6d96" exitCode=0 Jan 29 09:00:12 crc kubenswrapper[4895]: I0129 09:00:12.831370 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-k965n" event={"ID":"f8d33507-d01b-456f-b8af-8e61c9461ac0","Type":"ContainerDied","Data":"6250b68ae8289523ca90a0f14fe623da07746d0612c9ce9d3dac65169d7e6d96"} Jan 29 09:00:12 crc kubenswrapper[4895]: I0129 09:00:12.945108 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 09:00:12 crc kubenswrapper[4895]: I0129 09:00:12.997146 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-ovsdbserver-sb\") pod \"f8d33507-d01b-456f-b8af-8e61c9461ac0\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " Jan 29 09:00:12 crc kubenswrapper[4895]: I0129 09:00:12.997302 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw9cd\" (UniqueName: \"kubernetes.io/projected/f8d33507-d01b-456f-b8af-8e61c9461ac0-kube-api-access-gw9cd\") pod \"f8d33507-d01b-456f-b8af-8e61c9461ac0\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " Jan 29 09:00:12 crc kubenswrapper[4895]: I0129 09:00:12.997372 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-config\") pod \"f8d33507-d01b-456f-b8af-8e61c9461ac0\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " Jan 29 09:00:12 crc kubenswrapper[4895]: I0129 09:00:12.997454 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-ovsdbserver-nb\") pod \"f8d33507-d01b-456f-b8af-8e61c9461ac0\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " Jan 29 09:00:12 crc kubenswrapper[4895]: I0129 09:00:12.997584 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-dns-svc\") pod \"f8d33507-d01b-456f-b8af-8e61c9461ac0\" (UID: \"f8d33507-d01b-456f-b8af-8e61c9461ac0\") " Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 09:00:13.006075 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f8d33507-d01b-456f-b8af-8e61c9461ac0-kube-api-access-gw9cd" (OuterVolumeSpecName: "kube-api-access-gw9cd") pod "f8d33507-d01b-456f-b8af-8e61c9461ac0" (UID: "f8d33507-d01b-456f-b8af-8e61c9461ac0"). InnerVolumeSpecName "kube-api-access-gw9cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 09:00:13.053039 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f8d33507-d01b-456f-b8af-8e61c9461ac0" (UID: "f8d33507-d01b-456f-b8af-8e61c9461ac0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 09:00:13.055186 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-config" (OuterVolumeSpecName: "config") pod "f8d33507-d01b-456f-b8af-8e61c9461ac0" (UID: "f8d33507-d01b-456f-b8af-8e61c9461ac0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 09:00:13.058344 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f8d33507-d01b-456f-b8af-8e61c9461ac0" (UID: "f8d33507-d01b-456f-b8af-8e61c9461ac0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 09:00:13.063109 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f8d33507-d01b-456f-b8af-8e61c9461ac0" (UID: "f8d33507-d01b-456f-b8af-8e61c9461ac0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 09:00:13.100033 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 09:00:13.100093 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw9cd\" (UniqueName: \"kubernetes.io/projected/f8d33507-d01b-456f-b8af-8e61c9461ac0-kube-api-access-gw9cd\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 09:00:13.100111 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 09:00:13.100129 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 09:00:13.100141 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8d33507-d01b-456f-b8af-8e61c9461ac0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 09:00:13.224213 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8c9c220-de9a-4346-a0f4-a3b007057c3c" path="/var/lib/kubelet/pods/b8c9c220-de9a-4346-a0f4-a3b007057c3c/volumes" Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 09:00:13.842704 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-k965n" event={"ID":"f8d33507-d01b-456f-b8af-8e61c9461ac0","Type":"ContainerDied","Data":"47cb3f3fd10cec92a350a14f6c1ba1eb5168f49c0308fe9b100a7759d5af82c6"} Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 
09:00:13.843145 4895 scope.go:117] "RemoveContainer" containerID="6250b68ae8289523ca90a0f14fe623da07746d0612c9ce9d3dac65169d7e6d96" Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 09:00:13.842774 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-k965n" Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 09:00:13.873481 4895 scope.go:117] "RemoveContainer" containerID="aed32ac24683f35d67702df3f02f492f5c5d40ba8de88882168c894d1ffef54e" Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 09:00:13.878677 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-k965n"] Jan 29 09:00:13 crc kubenswrapper[4895]: I0129 09:00:13.887118 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-k965n"] Jan 29 09:00:14 crc kubenswrapper[4895]: I0129 09:00:14.771896 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mjz6w" podUID="5f71eedb-46ac-474f-9d1e-d4909a49e05b" containerName="ovn-controller" probeResult="failure" output=< Jan 29 09:00:14 crc kubenswrapper[4895]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 09:00:14 crc kubenswrapper[4895]: > Jan 29 09:00:14 crc kubenswrapper[4895]: I0129 09:00:14.841433 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 09:00:14 crc kubenswrapper[4895]: I0129 09:00:14.854546 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rzm2l" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.098081 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mjz6w-config-vd5h2"] Jan 29 09:00:15 crc kubenswrapper[4895]: E0129 09:00:15.103551 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d33507-d01b-456f-b8af-8e61c9461ac0" containerName="init" Jan 29 09:00:15 
crc kubenswrapper[4895]: I0129 09:00:15.103590 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d33507-d01b-456f-b8af-8e61c9461ac0" containerName="init" Jan 29 09:00:15 crc kubenswrapper[4895]: E0129 09:00:15.103618 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c9c220-de9a-4346-a0f4-a3b007057c3c" containerName="mariadb-account-create-update" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.103627 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c9c220-de9a-4346-a0f4-a3b007057c3c" containerName="mariadb-account-create-update" Jan 29 09:00:15 crc kubenswrapper[4895]: E0129 09:00:15.103650 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d33507-d01b-456f-b8af-8e61c9461ac0" containerName="dnsmasq-dns" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.103660 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d33507-d01b-456f-b8af-8e61c9461ac0" containerName="dnsmasq-dns" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.103849 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c9c220-de9a-4346-a0f4-a3b007057c3c" containerName="mariadb-account-create-update" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.103874 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d33507-d01b-456f-b8af-8e61c9461ac0" containerName="dnsmasq-dns" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.104586 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.108452 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.117770 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mjz6w-config-vd5h2"] Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.229681 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8d33507-d01b-456f-b8af-8e61c9461ac0" path="/var/lib/kubelet/pods/f8d33507-d01b-456f-b8af-8e61c9461ac0/volumes" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.241693 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c93ab4bd-b2ce-4298-a594-0733ca86f508-additional-scripts\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.242145 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c93ab4bd-b2ce-4298-a594-0733ca86f508-scripts\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.242473 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-run-ovn\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.244038 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-run\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.244122 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-log-ovn\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.244228 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59w9f\" (UniqueName: \"kubernetes.io/projected/c93ab4bd-b2ce-4298-a594-0733ca86f508-kube-api-access-59w9f\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.345513 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-log-ovn\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.345587 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-run\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: 
I0129 09:00:15.345724 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59w9f\" (UniqueName: \"kubernetes.io/projected/c93ab4bd-b2ce-4298-a594-0733ca86f508-kube-api-access-59w9f\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.346177 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-run\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.347350 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c93ab4bd-b2ce-4298-a594-0733ca86f508-additional-scripts\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.347410 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c93ab4bd-b2ce-4298-a594-0733ca86f508-scripts\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.347474 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-run-ovn\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.347630 
4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-run-ovn\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.348215 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-log-ovn\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.348733 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c93ab4bd-b2ce-4298-a594-0733ca86f508-additional-scripts\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.350839 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c93ab4bd-b2ce-4298-a594-0733ca86f508-scripts\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.374280 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59w9f\" (UniqueName: \"kubernetes.io/projected/c93ab4bd-b2ce-4298-a594-0733ca86f508-kube-api-access-59w9f\") pod \"ovn-controller-mjz6w-config-vd5h2\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") " pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.434887 4895 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ovn-controller-mjz6w-config-vd5h2" Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.876465 4895 generic.go:334] "Generic (PLEG): container finished" podID="cbcad4af-7c93-4d6e-b825-42a586db5d81" containerID="26d34178f24362025be3c60472a15a6f3b96f11f999bca0c1b399079c33299d8" exitCode=0 Jan 29 09:00:15 crc kubenswrapper[4895]: I0129 09:00:15.876577 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cbcad4af-7c93-4d6e-b825-42a586db5d81","Type":"ContainerDied","Data":"26d34178f24362025be3c60472a15a6f3b96f11f999bca0c1b399079c33299d8"} Jan 29 09:00:16 crc kubenswrapper[4895]: I0129 09:00:16.020984 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:00:16 crc kubenswrapper[4895]: I0129 09:00:16.021052 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:00:16 crc kubenswrapper[4895]: I0129 09:00:16.062908 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mjz6w-config-vd5h2"] Jan 29 09:00:16 crc kubenswrapper[4895]: W0129 09:00:16.064011 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc93ab4bd_b2ce_4298_a594_0733ca86f508.slice/crio-d0a7257140e9e1d59aa3f4182ff5e8c8acf00314510bce7a7581e6d655b1ddc8 WatchSource:0}: Error finding container d0a7257140e9e1d59aa3f4182ff5e8c8acf00314510bce7a7581e6d655b1ddc8: Status 404 returned 
error can't find the container with id d0a7257140e9e1d59aa3f4182ff5e8c8acf00314510bce7a7581e6d655b1ddc8 Jan 29 09:00:16 crc kubenswrapper[4895]: I0129 09:00:16.899953 4895 generic.go:334] "Generic (PLEG): container finished" podID="c93ab4bd-b2ce-4298-a594-0733ca86f508" containerID="feb1b04e1d1f76a2715d4c8a5ddfd6f4184a37a5d293ae92129d0771b8f3b915" exitCode=0 Jan 29 09:00:16 crc kubenswrapper[4895]: I0129 09:00:16.900145 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mjz6w-config-vd5h2" event={"ID":"c93ab4bd-b2ce-4298-a594-0733ca86f508","Type":"ContainerDied","Data":"feb1b04e1d1f76a2715d4c8a5ddfd6f4184a37a5d293ae92129d0771b8f3b915"} Jan 29 09:00:16 crc kubenswrapper[4895]: I0129 09:00:16.900450 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mjz6w-config-vd5h2" event={"ID":"c93ab4bd-b2ce-4298-a594-0733ca86f508","Type":"ContainerStarted","Data":"d0a7257140e9e1d59aa3f4182ff5e8c8acf00314510bce7a7581e6d655b1ddc8"} Jan 29 09:00:16 crc kubenswrapper[4895]: I0129 09:00:16.903550 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cbcad4af-7c93-4d6e-b825-42a586db5d81","Type":"ContainerStarted","Data":"a79ddc6cc9a8081dcca315fbaf6560ed3ec63f0c7c48656d13a4540ecbf048bd"} Jan 29 09:00:16 crc kubenswrapper[4895]: I0129 09:00:16.903816 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:00:16 crc kubenswrapper[4895]: I0129 09:00:16.928189 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8cvs2"] Jan 29 09:00:16 crc kubenswrapper[4895]: I0129 09:00:16.929510 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8cvs2" Jan 29 09:00:16 crc kubenswrapper[4895]: I0129 09:00:16.933525 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 09:00:16 crc kubenswrapper[4895]: I0129 09:00:16.959290 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8cvs2"] Jan 29 09:00:16 crc kubenswrapper[4895]: I0129 09:00:16.971722 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.046875706 podStartE2EDuration="1m22.971694395s" podCreationTimestamp="2026-01-29 08:58:54 +0000 UTC" firstStartedPulling="2026-01-29 08:58:59.775697964 +0000 UTC m=+1081.417206110" lastFinishedPulling="2026-01-29 08:59:40.700516663 +0000 UTC m=+1122.342024799" observedRunningTime="2026-01-29 09:00:16.963547847 +0000 UTC m=+1158.605056003" watchObservedRunningTime="2026-01-29 09:00:16.971694395 +0000 UTC m=+1158.613202541" Jan 29 09:00:16 crc kubenswrapper[4895]: I0129 09:00:16.989052 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7r7x\" (UniqueName: \"kubernetes.io/projected/fe994737-b654-4e7f-bd3f-672e069bdda0-kube-api-access-l7r7x\") pod \"root-account-create-update-8cvs2\" (UID: \"fe994737-b654-4e7f-bd3f-672e069bdda0\") " pod="openstack/root-account-create-update-8cvs2" Jan 29 09:00:16 crc kubenswrapper[4895]: I0129 09:00:16.989118 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe994737-b654-4e7f-bd3f-672e069bdda0-operator-scripts\") pod \"root-account-create-update-8cvs2\" (UID: \"fe994737-b654-4e7f-bd3f-672e069bdda0\") " pod="openstack/root-account-create-update-8cvs2" Jan 29 09:00:17 crc kubenswrapper[4895]: I0129 09:00:17.097266 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l7r7x\" (UniqueName: \"kubernetes.io/projected/fe994737-b654-4e7f-bd3f-672e069bdda0-kube-api-access-l7r7x\") pod \"root-account-create-update-8cvs2\" (UID: \"fe994737-b654-4e7f-bd3f-672e069bdda0\") " pod="openstack/root-account-create-update-8cvs2" Jan 29 09:00:17 crc kubenswrapper[4895]: I0129 09:00:17.097390 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe994737-b654-4e7f-bd3f-672e069bdda0-operator-scripts\") pod \"root-account-create-update-8cvs2\" (UID: \"fe994737-b654-4e7f-bd3f-672e069bdda0\") " pod="openstack/root-account-create-update-8cvs2" Jan 29 09:00:17 crc kubenswrapper[4895]: I0129 09:00:17.098487 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe994737-b654-4e7f-bd3f-672e069bdda0-operator-scripts\") pod \"root-account-create-update-8cvs2\" (UID: \"fe994737-b654-4e7f-bd3f-672e069bdda0\") " pod="openstack/root-account-create-update-8cvs2" Jan 29 09:00:17 crc kubenswrapper[4895]: I0129 09:00:17.123886 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7r7x\" (UniqueName: \"kubernetes.io/projected/fe994737-b654-4e7f-bd3f-672e069bdda0-kube-api-access-l7r7x\") pod \"root-account-create-update-8cvs2\" (UID: \"fe994737-b654-4e7f-bd3f-672e069bdda0\") " pod="openstack/root-account-create-update-8cvs2" Jan 29 09:00:17 crc kubenswrapper[4895]: I0129 09:00:17.255795 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8cvs2" Jan 29 09:00:17 crc kubenswrapper[4895]: I0129 09:00:17.898774 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8cvs2"] Jan 29 09:00:18 crc kubenswrapper[4895]: I0129 09:00:18.451372 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0" Jan 29 09:00:18 crc kubenswrapper[4895]: E0129 09:00:18.451639 4895 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 09:00:18 crc kubenswrapper[4895]: E0129 09:00:18.452126 4895 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 09:00:18 crc kubenswrapper[4895]: E0129 09:00:18.452211 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift podName:db0d35a0-7174-452f-bd71-2dae8f7dff11 nodeName:}" failed. No retries permitted until 2026-01-29 09:00:34.452189763 +0000 UTC m=+1176.093697909 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift") pod "swift-storage-0" (UID: "db0d35a0-7174-452f-bd71-2dae8f7dff11") : configmap "swift-ring-files" not found Jan 29 09:00:19 crc kubenswrapper[4895]: I0129 09:00:19.763162 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-mjz6w" Jan 29 09:00:21 crc kubenswrapper[4895]: I0129 09:00:21.043056 4895 generic.go:334] "Generic (PLEG): container finished" podID="073f4b22-319f-4cbb-ac96-c0a18da477a6" containerID="1debacd0ed7c3ac16c3cf040f0bbfeb8572200da97d8b4c9b796e54945ccbc29" exitCode=0 Jan 29 09:00:21 crc kubenswrapper[4895]: I0129 09:00:21.043141 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-drdk8" event={"ID":"073f4b22-319f-4cbb-ac96-c0a18da477a6","Type":"ContainerDied","Data":"1debacd0ed7c3ac16c3cf040f0bbfeb8572200da97d8b4c9b796e54945ccbc29"} Jan 29 09:00:23 crc kubenswrapper[4895]: I0129 09:00:23.071133 4895 generic.go:334] "Generic (PLEG): container finished" podID="7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" containerID="7c3e0fe6ef1bed526f92c62b23d0efd1cc2b74bb08f91fe399d1c4d8dcb612a5" exitCode=0 Jan 29 09:00:23 crc kubenswrapper[4895]: I0129 09:00:23.071834 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5","Type":"ContainerDied","Data":"7c3e0fe6ef1bed526f92c62b23d0efd1cc2b74bb08f91fe399d1c4d8dcb612a5"} Jan 29 09:00:27 crc kubenswrapper[4895]: E0129 09:00:27.468830 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 29 09:00:27 crc kubenswrapper[4895]: E0129 09:00:27.469572 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvprn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod glance-db-sync-kchhm_openstack(a7e86824-b384-45ea-b4bb-946f795bc9c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:00:27 crc kubenswrapper[4895]: E0129 09:00:27.471281 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-kchhm" podUID="a7e86824-b384-45ea-b4bb-946f795bc9c5" Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.645935 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-drdk8" Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.681050 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/073f4b22-319f-4cbb-ac96-c0a18da477a6-ring-data-devices\") pod \"073f4b22-319f-4cbb-ac96-c0a18da477a6\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.681729 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjdt4\" (UniqueName: \"kubernetes.io/projected/073f4b22-319f-4cbb-ac96-c0a18da477a6-kube-api-access-rjdt4\") pod \"073f4b22-319f-4cbb-ac96-c0a18da477a6\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.681889 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/073f4b22-319f-4cbb-ac96-c0a18da477a6-scripts\") pod \"073f4b22-319f-4cbb-ac96-c0a18da477a6\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.681999 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-combined-ca-bundle\") pod \"073f4b22-319f-4cbb-ac96-c0a18da477a6\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.682058 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-swiftconf\") pod \"073f4b22-319f-4cbb-ac96-c0a18da477a6\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.682184 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-dispersionconf\") pod \"073f4b22-319f-4cbb-ac96-c0a18da477a6\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.682216 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/073f4b22-319f-4cbb-ac96-c0a18da477a6-etc-swift\") pod \"073f4b22-319f-4cbb-ac96-c0a18da477a6\" (UID: \"073f4b22-319f-4cbb-ac96-c0a18da477a6\") " Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.682756 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/073f4b22-319f-4cbb-ac96-c0a18da477a6-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "073f4b22-319f-4cbb-ac96-c0a18da477a6" (UID: "073f4b22-319f-4cbb-ac96-c0a18da477a6"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.683726 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/073f4b22-319f-4cbb-ac96-c0a18da477a6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "073f4b22-319f-4cbb-ac96-c0a18da477a6" (UID: "073f4b22-319f-4cbb-ac96-c0a18da477a6"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.691673 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mjz6w-config-vd5h2"
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.694519 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/073f4b22-319f-4cbb-ac96-c0a18da477a6-kube-api-access-rjdt4" (OuterVolumeSpecName: "kube-api-access-rjdt4") pod "073f4b22-319f-4cbb-ac96-c0a18da477a6" (UID: "073f4b22-319f-4cbb-ac96-c0a18da477a6"). InnerVolumeSpecName "kube-api-access-rjdt4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.707406 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "073f4b22-319f-4cbb-ac96-c0a18da477a6" (UID: "073f4b22-319f-4cbb-ac96-c0a18da477a6"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.764017 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "073f4b22-319f-4cbb-ac96-c0a18da477a6" (UID: "073f4b22-319f-4cbb-ac96-c0a18da477a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.783647 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59w9f\" (UniqueName: \"kubernetes.io/projected/c93ab4bd-b2ce-4298-a594-0733ca86f508-kube-api-access-59w9f\") pod \"c93ab4bd-b2ce-4298-a594-0733ca86f508\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") "
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.783840 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c93ab4bd-b2ce-4298-a594-0733ca86f508-scripts\") pod \"c93ab4bd-b2ce-4298-a594-0733ca86f508\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") "
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.783869 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-log-ovn\") pod \"c93ab4bd-b2ce-4298-a594-0733ca86f508\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") "
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.783904 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-run\") pod \"c93ab4bd-b2ce-4298-a594-0733ca86f508\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") "
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.784006 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-run-ovn\") pod \"c93ab4bd-b2ce-4298-a594-0733ca86f508\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") "
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.784088 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c93ab4bd-b2ce-4298-a594-0733ca86f508-additional-scripts\") pod \"c93ab4bd-b2ce-4298-a594-0733ca86f508\" (UID: \"c93ab4bd-b2ce-4298-a594-0733ca86f508\") "
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.784522 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c93ab4bd-b2ce-4298-a594-0733ca86f508" (UID: "c93ab4bd-b2ce-4298-a594-0733ca86f508"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.784611 4895 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.784633 4895 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/073f4b22-319f-4cbb-ac96-c0a18da477a6-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.784647 4895 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/073f4b22-319f-4cbb-ac96-c0a18da477a6-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.784660 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjdt4\" (UniqueName: \"kubernetes.io/projected/073f4b22-319f-4cbb-ac96-c0a18da477a6-kube-api-access-rjdt4\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.784674 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.784707 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-run" (OuterVolumeSpecName: "var-run") pod "c93ab4bd-b2ce-4298-a594-0733ca86f508" (UID: "c93ab4bd-b2ce-4298-a594-0733ca86f508"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.784729 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c93ab4bd-b2ce-4298-a594-0733ca86f508" (UID: "c93ab4bd-b2ce-4298-a594-0733ca86f508"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.786495 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c93ab4bd-b2ce-4298-a594-0733ca86f508-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c93ab4bd-b2ce-4298-a594-0733ca86f508" (UID: "c93ab4bd-b2ce-4298-a594-0733ca86f508"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.787098 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c93ab4bd-b2ce-4298-a594-0733ca86f508-scripts" (OuterVolumeSpecName: "scripts") pod "c93ab4bd-b2ce-4298-a594-0733ca86f508" (UID: "c93ab4bd-b2ce-4298-a594-0733ca86f508"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.790025 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "073f4b22-319f-4cbb-ac96-c0a18da477a6" (UID: "073f4b22-319f-4cbb-ac96-c0a18da477a6"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.792793 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c93ab4bd-b2ce-4298-a594-0733ca86f508-kube-api-access-59w9f" (OuterVolumeSpecName: "kube-api-access-59w9f") pod "c93ab4bd-b2ce-4298-a594-0733ca86f508" (UID: "c93ab4bd-b2ce-4298-a594-0733ca86f508"). InnerVolumeSpecName "kube-api-access-59w9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.793022 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/073f4b22-319f-4cbb-ac96-c0a18da477a6-scripts" (OuterVolumeSpecName: "scripts") pod "073f4b22-319f-4cbb-ac96-c0a18da477a6" (UID: "073f4b22-319f-4cbb-ac96-c0a18da477a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.886556 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c93ab4bd-b2ce-4298-a594-0733ca86f508-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.886614 4895 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.886629 4895 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-run\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.886641 4895 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c93ab4bd-b2ce-4298-a594-0733ca86f508-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.886656 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/073f4b22-319f-4cbb-ac96-c0a18da477a6-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.886668 4895 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c93ab4bd-b2ce-4298-a594-0733ca86f508-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.886682 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59w9f\" (UniqueName: \"kubernetes.io/projected/c93ab4bd-b2ce-4298-a594-0733ca86f508-kube-api-access-59w9f\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:27 crc kubenswrapper[4895]: I0129 09:00:27.886695 4895 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/073f4b22-319f-4cbb-ac96-c0a18da477a6-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:28 crc kubenswrapper[4895]: I0129 09:00:28.122875 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5","Type":"ContainerStarted","Data":"78b578d09bb9a0244c465780f9b1e9a302262947c9504c01f4ba2604b679e677"}
Jan 29 09:00:28 crc kubenswrapper[4895]: I0129 09:00:28.123228 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 29 09:00:28 crc kubenswrapper[4895]: I0129 09:00:28.124266 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mjz6w-config-vd5h2" event={"ID":"c93ab4bd-b2ce-4298-a594-0733ca86f508","Type":"ContainerDied","Data":"d0a7257140e9e1d59aa3f4182ff5e8c8acf00314510bce7a7581e6d655b1ddc8"}
Jan 29 09:00:28 crc kubenswrapper[4895]: I0129 09:00:28.124294 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0a7257140e9e1d59aa3f4182ff5e8c8acf00314510bce7a7581e6d655b1ddc8"
Jan 29 09:00:28 crc kubenswrapper[4895]: I0129 09:00:28.124302 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mjz6w-config-vd5h2"
Jan 29 09:00:28 crc kubenswrapper[4895]: I0129 09:00:28.125717 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-drdk8"
Jan 29 09:00:28 crc kubenswrapper[4895]: I0129 09:00:28.125717 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-drdk8" event={"ID":"073f4b22-319f-4cbb-ac96-c0a18da477a6","Type":"ContainerDied","Data":"7e2d780e1348ea4b1b35358e1a1d3e666cb5ac5988ef46e0a624a93ed019c7cd"}
Jan 29 09:00:28 crc kubenswrapper[4895]: I0129 09:00:28.125860 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e2d780e1348ea4b1b35358e1a1d3e666cb5ac5988ef46e0a624a93ed019c7cd"
Jan 29 09:00:28 crc kubenswrapper[4895]: I0129 09:00:28.127422 4895 generic.go:334] "Generic (PLEG): container finished" podID="fe994737-b654-4e7f-bd3f-672e069bdda0" containerID="be7ada56dc9f66f2c8ee6c305dd374dd9bf5b9a47e7763b4172ec47cfeec8590" exitCode=0
Jan 29 09:00:28 crc kubenswrapper[4895]: I0129 09:00:28.127450 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8cvs2" event={"ID":"fe994737-b654-4e7f-bd3f-672e069bdda0","Type":"ContainerDied","Data":"be7ada56dc9f66f2c8ee6c305dd374dd9bf5b9a47e7763b4172ec47cfeec8590"}
Jan 29 09:00:28 crc kubenswrapper[4895]: I0129 09:00:28.127528 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8cvs2" event={"ID":"fe994737-b654-4e7f-bd3f-672e069bdda0","Type":"ContainerStarted","Data":"3d354c77d6349fbb6cba54a30717b5f5cb347b4f0f1ff527bf158f8aa702eaba"}
Jan 29 09:00:28 crc kubenswrapper[4895]: E0129 09:00:28.129892 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-kchhm" podUID="a7e86824-b384-45ea-b4bb-946f795bc9c5"
Jan 29 09:00:28 crc kubenswrapper[4895]: I0129 09:00:28.163527 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371942.691273 podStartE2EDuration="1m34.163502581s" podCreationTimestamp="2026-01-29 08:58:54 +0000 UTC" firstStartedPulling="2026-01-29 08:58:56.608982369 +0000 UTC m=+1078.250490515" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:28.154556161 +0000 UTC m=+1169.796064327" watchObservedRunningTime="2026-01-29 09:00:28.163502581 +0000 UTC m=+1169.805010737"
Jan 29 09:00:28 crc kubenswrapper[4895]: I0129 09:00:28.455230 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 09:00:28 crc kubenswrapper[4895]: I0129 09:00:28.818602 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mjz6w-config-vd5h2"]
Jan 29 09:00:28 crc kubenswrapper[4895]: I0129 09:00:28.828701 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mjz6w-config-vd5h2"]
Jan 29 09:00:29 crc kubenswrapper[4895]: I0129 09:00:29.288348 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c93ab4bd-b2ce-4298-a594-0733ca86f508" path="/var/lib/kubelet/pods/c93ab4bd-b2ce-4298-a594-0733ca86f508/volumes"
Jan 29 09:00:29 crc kubenswrapper[4895]: I0129 09:00:29.763137 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8cvs2"
Jan 29 09:00:29 crc kubenswrapper[4895]: I0129 09:00:29.883378 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe994737-b654-4e7f-bd3f-672e069bdda0-operator-scripts\") pod \"fe994737-b654-4e7f-bd3f-672e069bdda0\" (UID: \"fe994737-b654-4e7f-bd3f-672e069bdda0\") "
Jan 29 09:00:29 crc kubenswrapper[4895]: I0129 09:00:29.883657 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7r7x\" (UniqueName: \"kubernetes.io/projected/fe994737-b654-4e7f-bd3f-672e069bdda0-kube-api-access-l7r7x\") pod \"fe994737-b654-4e7f-bd3f-672e069bdda0\" (UID: \"fe994737-b654-4e7f-bd3f-672e069bdda0\") "
Jan 29 09:00:29 crc kubenswrapper[4895]: I0129 09:00:29.884396 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe994737-b654-4e7f-bd3f-672e069bdda0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe994737-b654-4e7f-bd3f-672e069bdda0" (UID: "fe994737-b654-4e7f-bd3f-672e069bdda0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:00:29 crc kubenswrapper[4895]: I0129 09:00:29.897384 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe994737-b654-4e7f-bd3f-672e069bdda0-kube-api-access-l7r7x" (OuterVolumeSpecName: "kube-api-access-l7r7x") pod "fe994737-b654-4e7f-bd3f-672e069bdda0" (UID: "fe994737-b654-4e7f-bd3f-672e069bdda0"). InnerVolumeSpecName "kube-api-access-l7r7x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:00:29 crc kubenswrapper[4895]: I0129 09:00:29.985884 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7r7x\" (UniqueName: \"kubernetes.io/projected/fe994737-b654-4e7f-bd3f-672e069bdda0-kube-api-access-l7r7x\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:29 crc kubenswrapper[4895]: I0129 09:00:29.986354 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe994737-b654-4e7f-bd3f-672e069bdda0-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:30 crc kubenswrapper[4895]: I0129 09:00:30.146185 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8cvs2" event={"ID":"fe994737-b654-4e7f-bd3f-672e069bdda0","Type":"ContainerDied","Data":"3d354c77d6349fbb6cba54a30717b5f5cb347b4f0f1ff527bf158f8aa702eaba"}
Jan 29 09:00:30 crc kubenswrapper[4895]: I0129 09:00:30.146251 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d354c77d6349fbb6cba54a30717b5f5cb347b4f0f1ff527bf158f8aa702eaba"
Jan 29 09:00:30 crc kubenswrapper[4895]: I0129 09:00:30.146341 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8cvs2"
Jan 29 09:00:34 crc kubenswrapper[4895]: I0129 09:00:34.672007 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0"
Jan 29 09:00:34 crc kubenswrapper[4895]: I0129 09:00:34.680858 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/db0d35a0-7174-452f-bd71-2dae8f7dff11-etc-swift\") pod \"swift-storage-0\" (UID: \"db0d35a0-7174-452f-bd71-2dae8f7dff11\") " pod="openstack/swift-storage-0"
Jan 29 09:00:34 crc kubenswrapper[4895]: I0129 09:00:34.925099 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 29 09:00:35 crc kubenswrapper[4895]: I0129 09:00:35.505537 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 29 09:00:36 crc kubenswrapper[4895]: I0129 09:00:36.378765 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db0d35a0-7174-452f-bd71-2dae8f7dff11","Type":"ContainerStarted","Data":"3facd741e8f8a1e7f076b44adbf71a6eb12e9cae1b2bfe90f5685e543f5dc7fe"}
Jan 29 09:00:38 crc kubenswrapper[4895]: I0129 09:00:38.608387 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db0d35a0-7174-452f-bd71-2dae8f7dff11","Type":"ContainerStarted","Data":"d0a1b7c99c17d621dd1bae00d17cdee53002be1dd915aca90289def7e5785da7"}
Jan 29 09:00:38 crc kubenswrapper[4895]: I0129 09:00:38.608830 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db0d35a0-7174-452f-bd71-2dae8f7dff11","Type":"ContainerStarted","Data":"14bbcad70dd5a6f64499f055b215de2545241ffcb6a05f6eb97a8daecc916e82"}
Jan 29 09:00:38 crc kubenswrapper[4895]: I0129 09:00:38.608845 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db0d35a0-7174-452f-bd71-2dae8f7dff11","Type":"ContainerStarted","Data":"fb5749a041f8089f6e92cf6458627e130ef2cfae5840bbc59384d63f5dc71865"}
Jan 29 09:00:38 crc kubenswrapper[4895]: I0129 09:00:38.608856 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db0d35a0-7174-452f-bd71-2dae8f7dff11","Type":"ContainerStarted","Data":"5ad1f06ee17444a66aaaaf128594ee80fde3d77bac88bcce66e1fcde7ff1b146"}
Jan 29 09:00:40 crc kubenswrapper[4895]: I0129 09:00:40.654748 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db0d35a0-7174-452f-bd71-2dae8f7dff11","Type":"ContainerStarted","Data":"f865bb2de3d4deaa28f70b75fbfd48be97266c6296359105c442f846c32299d9"}
Jan 29 09:00:42 crc kubenswrapper[4895]: I0129 09:00:42.045236 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db0d35a0-7174-452f-bd71-2dae8f7dff11","Type":"ContainerStarted","Data":"548cb1c7efb488c32424a88d2bcade6ead2b24f388e9f537e80db5e51a428744"}
Jan 29 09:00:42 crc kubenswrapper[4895]: I0129 09:00:42.046072 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db0d35a0-7174-452f-bd71-2dae8f7dff11","Type":"ContainerStarted","Data":"3ae2dee788d602761f7d1e42cdaaaa245ffd0133dfa02bfd86a73c4a8ae20fe3"}
Jan 29 09:00:42 crc kubenswrapper[4895]: I0129 09:00:42.046089 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db0d35a0-7174-452f-bd71-2dae8f7dff11","Type":"ContainerStarted","Data":"7002f6bcf6721da7f31ccc57289f9a5b96628ab11c97f10200d69f24feceb3ca"}
Jan 29 09:00:42 crc kubenswrapper[4895]: I0129 09:00:42.048515 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kchhm" event={"ID":"a7e86824-b384-45ea-b4bb-946f795bc9c5","Type":"ContainerStarted","Data":"83eccf0330358d74f34fad459119acc542635abe11e25da3b847b9dfa4ab0517"}
Jan 29 09:00:42 crc kubenswrapper[4895]: I0129 09:00:42.077470 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kchhm" podStartSLOduration=4.125447129 podStartE2EDuration="33.077447088s" podCreationTimestamp="2026-01-29 09:00:09 +0000 UTC" firstStartedPulling="2026-01-29 09:00:11.218986207 +0000 UTC m=+1152.860494353" lastFinishedPulling="2026-01-29 09:00:40.170986166 +0000 UTC m=+1181.812494312" observedRunningTime="2026-01-29 09:00:42.065231002 +0000 UTC m=+1183.706739148" watchObservedRunningTime="2026-01-29 09:00:42.077447088 +0000 UTC m=+1183.718955234"
Jan 29 09:00:43 crc kubenswrapper[4895]: I0129 09:00:43.069029 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db0d35a0-7174-452f-bd71-2dae8f7dff11","Type":"ContainerStarted","Data":"7990f8b629b44f8c466f3b88c1422facf991ab702496d17874c01039c76ff7fb"}
Jan 29 09:00:43 crc kubenswrapper[4895]: I0129 09:00:43.070540 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db0d35a0-7174-452f-bd71-2dae8f7dff11","Type":"ContainerStarted","Data":"1d09ed3aa72079c633dbcd572ed59092b03bab6d8deee7ae89d6121d9037645b"}
Jan 29 09:00:44 crc kubenswrapper[4895]: I0129 09:00:44.085401 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db0d35a0-7174-452f-bd71-2dae8f7dff11","Type":"ContainerStarted","Data":"2c92b2d29bdd8d44a22bbb9a7831ed8725ac079a29649631a278f82d4f038cd6"}
Jan 29 09:00:44 crc kubenswrapper[4895]: I0129 09:00:44.085906 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db0d35a0-7174-452f-bd71-2dae8f7dff11","Type":"ContainerStarted","Data":"ffbebd79803764221fa115dcc7c2daf53596c15dea4567c1068d09613bd42f3f"}
Jan 29 09:00:44 crc kubenswrapper[4895]: I0129 09:00:44.085956 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db0d35a0-7174-452f-bd71-2dae8f7dff11","Type":"ContainerStarted","Data":"9c6adaa558f72a300cbf25d7893b8ee503741e3f6c3bfb2c11dd234df3802e5b"}
Jan 29 09:00:45 crc kubenswrapper[4895]: I0129 09:00:45.102160 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db0d35a0-7174-452f-bd71-2dae8f7dff11","Type":"ContainerStarted","Data":"03a6fa27dd540a457b03593408bc74cd0732e8d1bd3bf983cf835b75075932c0"}
Jan 29 09:00:45 crc kubenswrapper[4895]: I0129 09:00:45.102566 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"db0d35a0-7174-452f-bd71-2dae8f7dff11","Type":"ContainerStarted","Data":"b3d3b4b9c436268d70e23f2bd0278745b8792632b33e3c86add9da6bcaf49f80"}
Jan 29 09:00:45 crc kubenswrapper[4895]: I0129 09:00:45.744320 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 29 09:00:45 crc kubenswrapper[4895]: I0129 09:00:45.786718 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.69129153 podStartE2EDuration="44.786680252s" podCreationTimestamp="2026-01-29 09:00:01 +0000 UTC" firstStartedPulling="2026-01-29 09:00:35.511545087 +0000 UTC m=+1177.153053233" lastFinishedPulling="2026-01-29 09:00:42.606933809 +0000 UTC m=+1184.248441955" observedRunningTime="2026-01-29 09:00:45.503656738 +0000 UTC m=+1187.145164884" watchObservedRunningTime="2026-01-29 09:00:45.786680252 +0000 UTC m=+1187.428188398"
Jan 29 09:00:45 crc kubenswrapper[4895]: I0129 09:00:45.865997 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-v5bw2"]
Jan 29 09:00:45 crc kubenswrapper[4895]: E0129 09:00:45.866533 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="073f4b22-319f-4cbb-ac96-c0a18da477a6" containerName="swift-ring-rebalance"
Jan 29 09:00:45 crc kubenswrapper[4895]: I0129 09:00:45.866559 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="073f4b22-319f-4cbb-ac96-c0a18da477a6" containerName="swift-ring-rebalance"
Jan 29 09:00:45 crc kubenswrapper[4895]: E0129 09:00:45.866573 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c93ab4bd-b2ce-4298-a594-0733ca86f508" containerName="ovn-config"
Jan 29 09:00:45 crc kubenswrapper[4895]: I0129 09:00:45.866585 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c93ab4bd-b2ce-4298-a594-0733ca86f508" containerName="ovn-config"
Jan 29 09:00:45 crc kubenswrapper[4895]: E0129 09:00:45.866611 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe994737-b654-4e7f-bd3f-672e069bdda0" containerName="mariadb-account-create-update"
Jan 29 09:00:45 crc kubenswrapper[4895]: I0129 09:00:45.866619 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe994737-b654-4e7f-bd3f-672e069bdda0" containerName="mariadb-account-create-update"
Jan 29 09:00:45 crc kubenswrapper[4895]: I0129 09:00:45.866830 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe994737-b654-4e7f-bd3f-672e069bdda0" containerName="mariadb-account-create-update"
Jan 29 09:00:45 crc kubenswrapper[4895]: I0129 09:00:45.866862 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c93ab4bd-b2ce-4298-a594-0733ca86f508" containerName="ovn-config"
Jan 29 09:00:45 crc kubenswrapper[4895]: I0129 09:00:45.866875 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="073f4b22-319f-4cbb-ac96-c0a18da477a6" containerName="swift-ring-rebalance"
Jan 29 09:00:45 crc kubenswrapper[4895]: I0129 09:00:45.868163 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:45 crc kubenswrapper[4895]: I0129 09:00:45.874956 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 29 09:00:45 crc kubenswrapper[4895]: I0129 09:00:45.922307 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-v5bw2"]
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.018737 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc28t\" (UniqueName: \"kubernetes.io/projected/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-kube-api-access-pc28t\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.018795 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.018821 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-config\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.019075 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.019284 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.019630 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.020352 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.020407 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.020454 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z82hk"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.021763 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac19f0c558f19013faadf18c0d93d61660767ea0e756e78bbf7d902981654a13"} pod="openshift-machine-config-operator/machine-config-daemon-z82hk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.021837 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" containerID="cri-o://ac19f0c558f19013faadf18c0d93d61660767ea0e756e78bbf7d902981654a13" gracePeriod=600
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.126167 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc28t\" (UniqueName: \"kubernetes.io/projected/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-kube-api-access-pc28t\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.126230 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.126252 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-config\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.126286 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.126396 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.126456 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.131314 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.133792 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.134231 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.134469 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-config\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.134670 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.178054 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc28t\" (UniqueName: \"kubernetes.io/projected/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-kube-api-access-pc28t\") pod \"dnsmasq-dns-5c79d794d7-v5bw2\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.202876 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.430671 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-j6gzt"]
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.433743 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-j6gzt"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.551038 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0be94e72-1174-4bf4-8706-387fef234ccb-operator-scripts\") pod \"barbican-db-create-j6gzt\" (UID: \"0be94e72-1174-4bf4-8706-387fef234ccb\") " pod="openstack/barbican-db-create-j6gzt"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.556377 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpd4l\" (UniqueName: \"kubernetes.io/projected/0be94e72-1174-4bf4-8706-387fef234ccb-kube-api-access-vpd4l\") pod \"barbican-db-create-j6gzt\" (UID: \"0be94e72-1174-4bf4-8706-387fef234ccb\") " pod="openstack/barbican-db-create-j6gzt"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.585081 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-j6gzt"]
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.659452 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0be94e72-1174-4bf4-8706-387fef234ccb-operator-scripts\") pod \"barbican-db-create-j6gzt\" (UID: \"0be94e72-1174-4bf4-8706-387fef234ccb\") " pod="openstack/barbican-db-create-j6gzt"
Jan 29 09:00:46 crc kubenswrapper[4895]: I0129 09:00:46.659562 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpd4l\" (UniqueName: \"kubernetes.io/projected/0be94e72-1174-4bf4-8706-387fef234ccb-kube-api-access-vpd4l\") pod \"barbican-db-create-j6gzt\" (UID: \"0be94e72-1174-4bf4-8706-387fef234ccb\") " pod="openstack/barbican-db-create-j6gzt"
Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.062217 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName:
\"kubernetes.io/configmap/0be94e72-1174-4bf4-8706-387fef234ccb-operator-scripts\") pod \"barbican-db-create-j6gzt\" (UID: \"0be94e72-1174-4bf4-8706-387fef234ccb\") " pod="openstack/barbican-db-create-j6gzt" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.133217 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpd4l\" (UniqueName: \"kubernetes.io/projected/0be94e72-1174-4bf4-8706-387fef234ccb-kube-api-access-vpd4l\") pod \"barbican-db-create-j6gzt\" (UID: \"0be94e72-1174-4bf4-8706-387fef234ccb\") " pod="openstack/barbican-db-create-j6gzt" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.159551 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-da19-account-create-update-vdswv"] Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.161272 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-da19-account-create-update-vdswv" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.165896 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.171846 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-da19-account-create-update-vdswv"] Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.177400 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-pddx8"] Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.178809 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-pddx8" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.189688 4895 generic.go:334] "Generic (PLEG): container finished" podID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerID="ac19f0c558f19013faadf18c0d93d61660767ea0e756e78bbf7d902981654a13" exitCode=0 Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.189743 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerDied","Data":"ac19f0c558f19013faadf18c0d93d61660767ea0e756e78bbf7d902981654a13"} Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.189777 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerStarted","Data":"2bf4fdb573e9460c60a2fe1e2302b28757eefe98ad1ae3c12a1c65609fd1bb38"} Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.189798 4895 scope.go:117] "RemoveContainer" containerID="e283faf84652d2e1164b1f178cfd437682bdf8b7e6ce6e055041db42bca73378" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.197788 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pddx8"] Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.272388 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkc7h\" (UniqueName: \"kubernetes.io/projected/18ef183c-59b1-4d07-831f-db71d6f978b8-kube-api-access-qkc7h\") pod \"cinder-db-create-pddx8\" (UID: \"18ef183c-59b1-4d07-831f-db71d6f978b8\") " pod="openstack/cinder-db-create-pddx8" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.272937 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8sn2\" (UniqueName: 
\"kubernetes.io/projected/2cac20b0-8086-44e6-8ef4-cf184b849ee3-kube-api-access-n8sn2\") pod \"barbican-da19-account-create-update-vdswv\" (UID: \"2cac20b0-8086-44e6-8ef4-cf184b849ee3\") " pod="openstack/barbican-da19-account-create-update-vdswv" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.272978 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18ef183c-59b1-4d07-831f-db71d6f978b8-operator-scripts\") pod \"cinder-db-create-pddx8\" (UID: \"18ef183c-59b1-4d07-831f-db71d6f978b8\") " pod="openstack/cinder-db-create-pddx8" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.273101 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cac20b0-8086-44e6-8ef4-cf184b849ee3-operator-scripts\") pod \"barbican-da19-account-create-update-vdswv\" (UID: \"2cac20b0-8086-44e6-8ef4-cf184b849ee3\") " pod="openstack/barbican-da19-account-create-update-vdswv" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.352589 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-544ht"] Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.354370 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-j6gzt" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.374784 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cac20b0-8086-44e6-8ef4-cf184b849ee3-operator-scripts\") pod \"barbican-da19-account-create-update-vdswv\" (UID: \"2cac20b0-8086-44e6-8ef4-cf184b849ee3\") " pod="openstack/barbican-da19-account-create-update-vdswv" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.374886 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkc7h\" (UniqueName: \"kubernetes.io/projected/18ef183c-59b1-4d07-831f-db71d6f978b8-kube-api-access-qkc7h\") pod \"cinder-db-create-pddx8\" (UID: \"18ef183c-59b1-4d07-831f-db71d6f978b8\") " pod="openstack/cinder-db-create-pddx8" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.374938 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8sn2\" (UniqueName: \"kubernetes.io/projected/2cac20b0-8086-44e6-8ef4-cf184b849ee3-kube-api-access-n8sn2\") pod \"barbican-da19-account-create-update-vdswv\" (UID: \"2cac20b0-8086-44e6-8ef4-cf184b849ee3\") " pod="openstack/barbican-da19-account-create-update-vdswv" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.374978 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18ef183c-59b1-4d07-831f-db71d6f978b8-operator-scripts\") pod \"cinder-db-create-pddx8\" (UID: \"18ef183c-59b1-4d07-831f-db71d6f978b8\") " pod="openstack/cinder-db-create-pddx8" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.376040 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cac20b0-8086-44e6-8ef4-cf184b849ee3-operator-scripts\") pod \"barbican-da19-account-create-update-vdswv\" (UID: 
\"2cac20b0-8086-44e6-8ef4-cf184b849ee3\") " pod="openstack/barbican-da19-account-create-update-vdswv" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.377453 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-544ht"] Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.377609 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-544ht" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.378756 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18ef183c-59b1-4d07-831f-db71d6f978b8-operator-scripts\") pod \"cinder-db-create-pddx8\" (UID: \"18ef183c-59b1-4d07-831f-db71d6f978b8\") " pod="openstack/cinder-db-create-pddx8" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.403141 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8sn2\" (UniqueName: \"kubernetes.io/projected/2cac20b0-8086-44e6-8ef4-cf184b849ee3-kube-api-access-n8sn2\") pod \"barbican-da19-account-create-update-vdswv\" (UID: \"2cac20b0-8086-44e6-8ef4-cf184b849ee3\") " pod="openstack/barbican-da19-account-create-update-vdswv" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.415304 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkc7h\" (UniqueName: \"kubernetes.io/projected/18ef183c-59b1-4d07-831f-db71d6f978b8-kube-api-access-qkc7h\") pod \"cinder-db-create-pddx8\" (UID: \"18ef183c-59b1-4d07-831f-db71d6f978b8\") " pod="openstack/cinder-db-create-pddx8" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.478332 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd9642f6-4325-49a2-bad8-71b3383cc5ca-operator-scripts\") pod \"neutron-db-create-544ht\" (UID: \"dd9642f6-4325-49a2-bad8-71b3383cc5ca\") " 
pod="openstack/neutron-db-create-544ht" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.478475 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pnx4\" (UniqueName: \"kubernetes.io/projected/dd9642f6-4325-49a2-bad8-71b3383cc5ca-kube-api-access-9pnx4\") pod \"neutron-db-create-544ht\" (UID: \"dd9642f6-4325-49a2-bad8-71b3383cc5ca\") " pod="openstack/neutron-db-create-544ht" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.502941 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-da19-account-create-update-vdswv" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.506203 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d6b8-account-create-update-j4zvh"] Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.507721 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d6b8-account-create-update-j4zvh" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.516822 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.550386 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pddx8" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.554216 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-sd8bs"] Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.556134 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-sd8bs" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.560604 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2twgr" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.560955 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.561080 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.565651 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.581140 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd9642f6-4325-49a2-bad8-71b3383cc5ca-operator-scripts\") pod \"neutron-db-create-544ht\" (UID: \"dd9642f6-4325-49a2-bad8-71b3383cc5ca\") " pod="openstack/neutron-db-create-544ht" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.581227 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pnx4\" (UniqueName: \"kubernetes.io/projected/dd9642f6-4325-49a2-bad8-71b3383cc5ca-kube-api-access-9pnx4\") pod \"neutron-db-create-544ht\" (UID: \"dd9642f6-4325-49a2-bad8-71b3383cc5ca\") " pod="openstack/neutron-db-create-544ht" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.582652 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd9642f6-4325-49a2-bad8-71b3383cc5ca-operator-scripts\") pod \"neutron-db-create-544ht\" (UID: \"dd9642f6-4325-49a2-bad8-71b3383cc5ca\") " pod="openstack/neutron-db-create-544ht" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.590931 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-db-sync-sd8bs"] Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.617817 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d6b8-account-create-update-j4zvh"] Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.652241 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pnx4\" (UniqueName: \"kubernetes.io/projected/dd9642f6-4325-49a2-bad8-71b3383cc5ca-kube-api-access-9pnx4\") pod \"neutron-db-create-544ht\" (UID: \"dd9642f6-4325-49a2-bad8-71b3383cc5ca\") " pod="openstack/neutron-db-create-544ht" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.683846 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-config-data\") pod \"keystone-db-sync-sd8bs\" (UID: \"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480\") " pod="openstack/keystone-db-sync-sd8bs" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.683904 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5l68\" (UniqueName: \"kubernetes.io/projected/f8d0ec49-1480-4452-a955-1ee612064f8a-kube-api-access-c5l68\") pod \"cinder-d6b8-account-create-update-j4zvh\" (UID: \"f8d0ec49-1480-4452-a955-1ee612064f8a\") " pod="openstack/cinder-d6b8-account-create-update-j4zvh" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.684008 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc64n\" (UniqueName: \"kubernetes.io/projected/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-kube-api-access-wc64n\") pod \"keystone-db-sync-sd8bs\" (UID: \"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480\") " pod="openstack/keystone-db-sync-sd8bs" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.684046 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-combined-ca-bundle\") pod \"keystone-db-sync-sd8bs\" (UID: \"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480\") " pod="openstack/keystone-db-sync-sd8bs" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.684095 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8d0ec49-1480-4452-a955-1ee612064f8a-operator-scripts\") pod \"cinder-d6b8-account-create-update-j4zvh\" (UID: \"f8d0ec49-1480-4452-a955-1ee612064f8a\") " pod="openstack/cinder-d6b8-account-create-update-j4zvh" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.694716 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-v5bw2"] Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.721423 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d597-account-create-update-ks9x5"] Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.727078 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d597-account-create-update-ks9x5" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.741561 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.743931 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d597-account-create-update-ks9x5"] Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.786892 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-config-data\") pod \"keystone-db-sync-sd8bs\" (UID: \"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480\") " pod="openstack/keystone-db-sync-sd8bs" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.787046 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5l68\" (UniqueName: \"kubernetes.io/projected/f8d0ec49-1480-4452-a955-1ee612064f8a-kube-api-access-c5l68\") pod \"cinder-d6b8-account-create-update-j4zvh\" (UID: \"f8d0ec49-1480-4452-a955-1ee612064f8a\") " pod="openstack/cinder-d6b8-account-create-update-j4zvh" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.787220 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc64n\" (UniqueName: \"kubernetes.io/projected/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-kube-api-access-wc64n\") pod \"keystone-db-sync-sd8bs\" (UID: \"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480\") " pod="openstack/keystone-db-sync-sd8bs" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.787317 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-combined-ca-bundle\") pod \"keystone-db-sync-sd8bs\" (UID: \"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480\") " pod="openstack/keystone-db-sync-sd8bs" Jan 29 
09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.787429 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8d0ec49-1480-4452-a955-1ee612064f8a-operator-scripts\") pod \"cinder-d6b8-account-create-update-j4zvh\" (UID: \"f8d0ec49-1480-4452-a955-1ee612064f8a\") " pod="openstack/cinder-d6b8-account-create-update-j4zvh" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.788554 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8d0ec49-1480-4452-a955-1ee612064f8a-operator-scripts\") pod \"cinder-d6b8-account-create-update-j4zvh\" (UID: \"f8d0ec49-1480-4452-a955-1ee612064f8a\") " pod="openstack/cinder-d6b8-account-create-update-j4zvh" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.794093 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-config-data\") pod \"keystone-db-sync-sd8bs\" (UID: \"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480\") " pod="openstack/keystone-db-sync-sd8bs" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.797368 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-combined-ca-bundle\") pod \"keystone-db-sync-sd8bs\" (UID: \"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480\") " pod="openstack/keystone-db-sync-sd8bs" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.811872 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5l68\" (UniqueName: \"kubernetes.io/projected/f8d0ec49-1480-4452-a955-1ee612064f8a-kube-api-access-c5l68\") pod \"cinder-d6b8-account-create-update-j4zvh\" (UID: \"f8d0ec49-1480-4452-a955-1ee612064f8a\") " pod="openstack/cinder-d6b8-account-create-update-j4zvh" Jan 29 09:00:47 crc 
kubenswrapper[4895]: I0129 09:00:47.814879 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc64n\" (UniqueName: \"kubernetes.io/projected/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-kube-api-access-wc64n\") pod \"keystone-db-sync-sd8bs\" (UID: \"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480\") " pod="openstack/keystone-db-sync-sd8bs" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.838706 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-544ht" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.877661 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d6b8-account-create-update-j4zvh" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.894612 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb-operator-scripts\") pod \"neutron-d597-account-create-update-ks9x5\" (UID: \"f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb\") " pod="openstack/neutron-d597-account-create-update-ks9x5" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.900771 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glrjk\" (UniqueName: \"kubernetes.io/projected/f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb-kube-api-access-glrjk\") pod \"neutron-d597-account-create-update-ks9x5\" (UID: \"f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb\") " pod="openstack/neutron-d597-account-create-update-ks9x5" Jan 29 09:00:47 crc kubenswrapper[4895]: I0129 09:00:47.964633 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-j6gzt"] Jan 29 09:00:48 crc kubenswrapper[4895]: I0129 09:00:48.012825 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glrjk\" (UniqueName: 
\"kubernetes.io/projected/f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb-kube-api-access-glrjk\") pod \"neutron-d597-account-create-update-ks9x5\" (UID: \"f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb\") " pod="openstack/neutron-d597-account-create-update-ks9x5" Jan 29 09:00:48 crc kubenswrapper[4895]: I0129 09:00:48.012969 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb-operator-scripts\") pod \"neutron-d597-account-create-update-ks9x5\" (UID: \"f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb\") " pod="openstack/neutron-d597-account-create-update-ks9x5" Jan 29 09:00:48 crc kubenswrapper[4895]: I0129 09:00:48.014462 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb-operator-scripts\") pod \"neutron-d597-account-create-update-ks9x5\" (UID: \"f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb\") " pod="openstack/neutron-d597-account-create-update-ks9x5" Jan 29 09:00:48 crc kubenswrapper[4895]: I0129 09:00:48.032885 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sd8bs" Jan 29 09:00:48 crc kubenswrapper[4895]: I0129 09:00:48.054256 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glrjk\" (UniqueName: \"kubernetes.io/projected/f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb-kube-api-access-glrjk\") pod \"neutron-d597-account-create-update-ks9x5\" (UID: \"f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb\") " pod="openstack/neutron-d597-account-create-update-ks9x5" Jan 29 09:00:48 crc kubenswrapper[4895]: I0129 09:00:48.065555 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d597-account-create-update-ks9x5" Jan 29 09:00:48 crc kubenswrapper[4895]: I0129 09:00:48.218993 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-j6gzt" event={"ID":"0be94e72-1174-4bf4-8706-387fef234ccb","Type":"ContainerStarted","Data":"710b862240f4c9dbc3952bb721f37862c45a348692631a54b44a6015eb6c9c75"} Jan 29 09:00:48 crc kubenswrapper[4895]: I0129 09:00:48.227694 4895 generic.go:334] "Generic (PLEG): container finished" podID="c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" containerID="8a484f4c6b2f116bf5606f0f05902166a8551c19293ae4490c591cb360bcea37" exitCode=0 Jan 29 09:00:48 crc kubenswrapper[4895]: I0129 09:00:48.229168 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2" event={"ID":"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9","Type":"ContainerDied","Data":"8a484f4c6b2f116bf5606f0f05902166a8551c19293ae4490c591cb360bcea37"} Jan 29 09:00:48 crc kubenswrapper[4895]: I0129 09:00:48.229217 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2" event={"ID":"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9","Type":"ContainerStarted","Data":"1e6d82e1914123b0aafa367af25c4231e4a0c0e3c27b9c5fd532985dec67ea3b"} Jan 29 09:00:48 crc kubenswrapper[4895]: I0129 09:00:48.343373 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pddx8"] Jan 29 09:00:48 crc kubenswrapper[4895]: I0129 09:00:48.382714 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-da19-account-create-update-vdswv"] Jan 29 09:00:48 crc kubenswrapper[4895]: I0129 09:00:48.672211 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-544ht"] Jan 29 09:00:48 crc kubenswrapper[4895]: W0129 09:00:48.675644 4895 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d0ec49_1480_4452_a955_1ee612064f8a.slice/crio-a675eca89e1bf86ba7076ccda37b8b1d9edcf7640ac0bc7b1446f4a4d2bab584 WatchSource:0}: Error finding container a675eca89e1bf86ba7076ccda37b8b1d9edcf7640ac0bc7b1446f4a4d2bab584: Status 404 returned error can't find the container with id a675eca89e1bf86ba7076ccda37b8b1d9edcf7640ac0bc7b1446f4a4d2bab584
Jan 29 09:00:48 crc kubenswrapper[4895]: I0129 09:00:48.680222 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d6b8-account-create-update-j4zvh"]
Jan 29 09:00:48 crc kubenswrapper[4895]: I0129 09:00:48.852226 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sd8bs"]
Jan 29 09:00:48 crc kubenswrapper[4895]: I0129 09:00:48.868640 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d597-account-create-update-ks9x5"]
Jan 29 09:00:48 crc kubenswrapper[4895]: W0129 09:00:48.882773 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc7fffcb_533d_439c_ae0d_7b0bbe9d5480.slice/crio-9f5ca5776822324aebb9c5318c9e409c1687dc7f2b32bb87f2207dbeba7d605b WatchSource:0}: Error finding container 9f5ca5776822324aebb9c5318c9e409c1687dc7f2b32bb87f2207dbeba7d605b: Status 404 returned error can't find the container with id 9f5ca5776822324aebb9c5318c9e409c1687dc7f2b32bb87f2207dbeba7d605b
Jan 29 09:00:49 crc kubenswrapper[4895]: I0129 09:00:49.265467 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-544ht" event={"ID":"dd9642f6-4325-49a2-bad8-71b3383cc5ca","Type":"ContainerStarted","Data":"14c6f962c783405a26a2e2bba49d2d196e112a3c845a9605da3feb0f00d032b2"}
Jan 29 09:00:49 crc kubenswrapper[4895]: I0129 09:00:49.269276 4895 generic.go:334] "Generic (PLEG): container finished" podID="0be94e72-1174-4bf4-8706-387fef234ccb" containerID="677d52ec519ee0e2c063f859d7b06ac85224fc30cb82f12b7b8b32a441406836" exitCode=0
Jan 29 09:00:49 crc kubenswrapper[4895]: I0129 09:00:49.269340 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-j6gzt" event={"ID":"0be94e72-1174-4bf4-8706-387fef234ccb","Type":"ContainerDied","Data":"677d52ec519ee0e2c063f859d7b06ac85224fc30cb82f12b7b8b32a441406836"}
Jan 29 09:00:49 crc kubenswrapper[4895]: I0129 09:00:49.272483 4895 generic.go:334] "Generic (PLEG): container finished" podID="18ef183c-59b1-4d07-831f-db71d6f978b8" containerID="6a10d63066b10c1ed66835ba12fc34347a0d1a62a87d1b1a284cc96a885d4cfd" exitCode=0
Jan 29 09:00:49 crc kubenswrapper[4895]: I0129 09:00:49.272560 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pddx8" event={"ID":"18ef183c-59b1-4d07-831f-db71d6f978b8","Type":"ContainerDied","Data":"6a10d63066b10c1ed66835ba12fc34347a0d1a62a87d1b1a284cc96a885d4cfd"}
Jan 29 09:00:49 crc kubenswrapper[4895]: I0129 09:00:49.272594 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pddx8" event={"ID":"18ef183c-59b1-4d07-831f-db71d6f978b8","Type":"ContainerStarted","Data":"1d72ade36475bfe33a59d2d3500f052df8b606ab35f0f8e9a7dfb5cb32f5944c"}
Jan 29 09:00:49 crc kubenswrapper[4895]: I0129 09:00:49.276626 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2" event={"ID":"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9","Type":"ContainerStarted","Data":"4e3780dfeca1f60ed185c9b8eaf152e2345370a2ad6837321c72cc73db13bed2"}
Jan 29 09:00:49 crc kubenswrapper[4895]: I0129 09:00:49.276780 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:49 crc kubenswrapper[4895]: I0129 09:00:49.278846 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-da19-account-create-update-vdswv" event={"ID":"2cac20b0-8086-44e6-8ef4-cf184b849ee3","Type":"ContainerStarted","Data":"70893ed3120f75e62eaa24fc4bcc1664f9fdeb2a7b0c4a20ef73232c1787a9d5"}
Jan 29 09:00:49 crc kubenswrapper[4895]: I0129 09:00:49.278877 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-da19-account-create-update-vdswv" event={"ID":"2cac20b0-8086-44e6-8ef4-cf184b849ee3","Type":"ContainerStarted","Data":"80425ec5c13b53c8071efeb010685758996bd8dccdd3085c6dd27c29ca50e815"}
Jan 29 09:00:49 crc kubenswrapper[4895]: I0129 09:00:49.281413 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d6b8-account-create-update-j4zvh" event={"ID":"f8d0ec49-1480-4452-a955-1ee612064f8a","Type":"ContainerStarted","Data":"a675eca89e1bf86ba7076ccda37b8b1d9edcf7640ac0bc7b1446f4a4d2bab584"}
Jan 29 09:00:49 crc kubenswrapper[4895]: I0129 09:00:49.294459 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sd8bs" event={"ID":"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480","Type":"ContainerStarted","Data":"9f5ca5776822324aebb9c5318c9e409c1687dc7f2b32bb87f2207dbeba7d605b"}
Jan 29 09:00:49 crc kubenswrapper[4895]: I0129 09:00:49.295642 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d597-account-create-update-ks9x5" event={"ID":"f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb","Type":"ContainerStarted","Data":"effd75acd15a46a0830ac0e9386f4710d44ae8b418db70fb95a27cc0cf1a624c"}
Jan 29 09:00:49 crc kubenswrapper[4895]: I0129 09:00:49.451847 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-da19-account-create-update-vdswv" podStartSLOduration=3.451817557 podStartE2EDuration="3.451817557s" podCreationTimestamp="2026-01-29 09:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:49.449502385 +0000 UTC m=+1191.091010531" watchObservedRunningTime="2026-01-29 09:00:49.451817557 +0000 UTC m=+1191.093325703"
Jan 29 09:00:49 crc kubenswrapper[4895]: I0129 09:00:49.514259 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2" podStartSLOduration=4.514222646 podStartE2EDuration="4.514222646s" podCreationTimestamp="2026-01-29 09:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:49.509987673 +0000 UTC m=+1191.151495819" watchObservedRunningTime="2026-01-29 09:00:49.514222646 +0000 UTC m=+1191.155730792"
Jan 29 09:00:50 crc kubenswrapper[4895]: I0129 09:00:50.324673 4895 generic.go:334] "Generic (PLEG): container finished" podID="2cac20b0-8086-44e6-8ef4-cf184b849ee3" containerID="70893ed3120f75e62eaa24fc4bcc1664f9fdeb2a7b0c4a20ef73232c1787a9d5" exitCode=0
Jan 29 09:00:50 crc kubenswrapper[4895]: I0129 09:00:50.325360 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-da19-account-create-update-vdswv" event={"ID":"2cac20b0-8086-44e6-8ef4-cf184b849ee3","Type":"ContainerDied","Data":"70893ed3120f75e62eaa24fc4bcc1664f9fdeb2a7b0c4a20ef73232c1787a9d5"}
Jan 29 09:00:50 crc kubenswrapper[4895]: I0129 09:00:50.330977 4895 generic.go:334] "Generic (PLEG): container finished" podID="f8d0ec49-1480-4452-a955-1ee612064f8a" containerID="3f5b6e9f53a8a3c06bf166d8e38ab7424f333f546481bf66b53683e2d1930742" exitCode=0
Jan 29 09:00:50 crc kubenswrapper[4895]: I0129 09:00:50.331164 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d6b8-account-create-update-j4zvh" event={"ID":"f8d0ec49-1480-4452-a955-1ee612064f8a","Type":"ContainerDied","Data":"3f5b6e9f53a8a3c06bf166d8e38ab7424f333f546481bf66b53683e2d1930742"}
Jan 29 09:00:50 crc kubenswrapper[4895]: I0129 09:00:50.336189 4895 generic.go:334] "Generic (PLEG): container finished" podID="f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb" containerID="c1f0cb2ad269867bb442306bd648c59a4e7c43f2d5bdc95b54c210db4c2a7dfd" exitCode=0
Jan 29 09:00:50 crc kubenswrapper[4895]: I0129 09:00:50.336666 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d597-account-create-update-ks9x5" event={"ID":"f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb","Type":"ContainerDied","Data":"c1f0cb2ad269867bb442306bd648c59a4e7c43f2d5bdc95b54c210db4c2a7dfd"}
Jan 29 09:00:50 crc kubenswrapper[4895]: I0129 09:00:50.351823 4895 generic.go:334] "Generic (PLEG): container finished" podID="dd9642f6-4325-49a2-bad8-71b3383cc5ca" containerID="1d0f8de9b213f39cab42aee331be1f6bd5501ff5305cef3b30a26d261f8a0e63" exitCode=0
Jan 29 09:00:50 crc kubenswrapper[4895]: I0129 09:00:50.352826 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-544ht" event={"ID":"dd9642f6-4325-49a2-bad8-71b3383cc5ca","Type":"ContainerDied","Data":"1d0f8de9b213f39cab42aee331be1f6bd5501ff5305cef3b30a26d261f8a0e63"}
Jan 29 09:00:50 crc kubenswrapper[4895]: I0129 09:00:50.909249 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-j6gzt"
Jan 29 09:00:50 crc kubenswrapper[4895]: I0129 09:00:50.918203 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pddx8"
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.004751 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpd4l\" (UniqueName: \"kubernetes.io/projected/0be94e72-1174-4bf4-8706-387fef234ccb-kube-api-access-vpd4l\") pod \"0be94e72-1174-4bf4-8706-387fef234ccb\" (UID: \"0be94e72-1174-4bf4-8706-387fef234ccb\") "
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.004935 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0be94e72-1174-4bf4-8706-387fef234ccb-operator-scripts\") pod \"0be94e72-1174-4bf4-8706-387fef234ccb\" (UID: \"0be94e72-1174-4bf4-8706-387fef234ccb\") "
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.004994 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18ef183c-59b1-4d07-831f-db71d6f978b8-operator-scripts\") pod \"18ef183c-59b1-4d07-831f-db71d6f978b8\" (UID: \"18ef183c-59b1-4d07-831f-db71d6f978b8\") "
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.005231 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkc7h\" (UniqueName: \"kubernetes.io/projected/18ef183c-59b1-4d07-831f-db71d6f978b8-kube-api-access-qkc7h\") pod \"18ef183c-59b1-4d07-831f-db71d6f978b8\" (UID: \"18ef183c-59b1-4d07-831f-db71d6f978b8\") "
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.005991 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0be94e72-1174-4bf4-8706-387fef234ccb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0be94e72-1174-4bf4-8706-387fef234ccb" (UID: "0be94e72-1174-4bf4-8706-387fef234ccb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.006060 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18ef183c-59b1-4d07-831f-db71d6f978b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18ef183c-59b1-4d07-831f-db71d6f978b8" (UID: "18ef183c-59b1-4d07-831f-db71d6f978b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.014483 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18ef183c-59b1-4d07-831f-db71d6f978b8-kube-api-access-qkc7h" (OuterVolumeSpecName: "kube-api-access-qkc7h") pod "18ef183c-59b1-4d07-831f-db71d6f978b8" (UID: "18ef183c-59b1-4d07-831f-db71d6f978b8"). InnerVolumeSpecName "kube-api-access-qkc7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.029409 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be94e72-1174-4bf4-8706-387fef234ccb-kube-api-access-vpd4l" (OuterVolumeSpecName: "kube-api-access-vpd4l") pod "0be94e72-1174-4bf4-8706-387fef234ccb" (UID: "0be94e72-1174-4bf4-8706-387fef234ccb"). InnerVolumeSpecName "kube-api-access-vpd4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.107589 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkc7h\" (UniqueName: \"kubernetes.io/projected/18ef183c-59b1-4d07-831f-db71d6f978b8-kube-api-access-qkc7h\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.107623 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpd4l\" (UniqueName: \"kubernetes.io/projected/0be94e72-1174-4bf4-8706-387fef234ccb-kube-api-access-vpd4l\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.107635 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0be94e72-1174-4bf4-8706-387fef234ccb-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.107647 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18ef183c-59b1-4d07-831f-db71d6f978b8-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.362898 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-j6gzt" event={"ID":"0be94e72-1174-4bf4-8706-387fef234ccb","Type":"ContainerDied","Data":"710b862240f4c9dbc3952bb721f37862c45a348692631a54b44a6015eb6c9c75"}
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.363029 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="710b862240f4c9dbc3952bb721f37862c45a348692631a54b44a6015eb6c9c75"
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.363117 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-j6gzt"
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.365909 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pddx8" event={"ID":"18ef183c-59b1-4d07-831f-db71d6f978b8","Type":"ContainerDied","Data":"1d72ade36475bfe33a59d2d3500f052df8b606ab35f0f8e9a7dfb5cb32f5944c"}
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.366036 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d72ade36475bfe33a59d2d3500f052df8b606ab35f0f8e9a7dfb5cb32f5944c"
Jan 29 09:00:51 crc kubenswrapper[4895]: I0129 09:00:51.366207 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pddx8"
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.039346 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d597-account-create-update-ks9x5"
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.049565 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-544ht"
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.069199 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-da19-account-create-update-vdswv"
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.074940 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d6b8-account-create-update-j4zvh"
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.204081 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cac20b0-8086-44e6-8ef4-cf184b849ee3-operator-scripts\") pod \"2cac20b0-8086-44e6-8ef4-cf184b849ee3\" (UID: \"2cac20b0-8086-44e6-8ef4-cf184b849ee3\") "
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.204646 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cac20b0-8086-44e6-8ef4-cf184b849ee3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2cac20b0-8086-44e6-8ef4-cf184b849ee3" (UID: "2cac20b0-8086-44e6-8ef4-cf184b849ee3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.204993 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8d0ec49-1480-4452-a955-1ee612064f8a-operator-scripts\") pod \"f8d0ec49-1480-4452-a955-1ee612064f8a\" (UID: \"f8d0ec49-1480-4452-a955-1ee612064f8a\") "
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.205048 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glrjk\" (UniqueName: \"kubernetes.io/projected/f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb-kube-api-access-glrjk\") pod \"f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb\" (UID: \"f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb\") "
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.205126 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb-operator-scripts\") pod \"f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb\" (UID: \"f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb\") "
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.205426 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8d0ec49-1480-4452-a955-1ee612064f8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8d0ec49-1480-4452-a955-1ee612064f8a" (UID: "f8d0ec49-1480-4452-a955-1ee612064f8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.205644 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb" (UID: "f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.205794 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5l68\" (UniqueName: \"kubernetes.io/projected/f8d0ec49-1480-4452-a955-1ee612064f8a-kube-api-access-c5l68\") pod \"f8d0ec49-1480-4452-a955-1ee612064f8a\" (UID: \"f8d0ec49-1480-4452-a955-1ee612064f8a\") "
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.206487 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pnx4\" (UniqueName: \"kubernetes.io/projected/dd9642f6-4325-49a2-bad8-71b3383cc5ca-kube-api-access-9pnx4\") pod \"dd9642f6-4325-49a2-bad8-71b3383cc5ca\" (UID: \"dd9642f6-4325-49a2-bad8-71b3383cc5ca\") "
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.206945 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd9642f6-4325-49a2-bad8-71b3383cc5ca-operator-scripts\") pod \"dd9642f6-4325-49a2-bad8-71b3383cc5ca\" (UID: \"dd9642f6-4325-49a2-bad8-71b3383cc5ca\") "
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.207207 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9642f6-4325-49a2-bad8-71b3383cc5ca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd9642f6-4325-49a2-bad8-71b3383cc5ca" (UID: "dd9642f6-4325-49a2-bad8-71b3383cc5ca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.207432 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8sn2\" (UniqueName: \"kubernetes.io/projected/2cac20b0-8086-44e6-8ef4-cf184b849ee3-kube-api-access-n8sn2\") pod \"2cac20b0-8086-44e6-8ef4-cf184b849ee3\" (UID: \"2cac20b0-8086-44e6-8ef4-cf184b849ee3\") "
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.208724 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.208747 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd9642f6-4325-49a2-bad8-71b3383cc5ca-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.208759 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cac20b0-8086-44e6-8ef4-cf184b849ee3-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.208770 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8d0ec49-1480-4452-a955-1ee612064f8a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.214556 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9642f6-4325-49a2-bad8-71b3383cc5ca-kube-api-access-9pnx4" (OuterVolumeSpecName: "kube-api-access-9pnx4") pod "dd9642f6-4325-49a2-bad8-71b3383cc5ca" (UID: "dd9642f6-4325-49a2-bad8-71b3383cc5ca"). InnerVolumeSpecName "kube-api-access-9pnx4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.216014 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cac20b0-8086-44e6-8ef4-cf184b849ee3-kube-api-access-n8sn2" (OuterVolumeSpecName: "kube-api-access-n8sn2") pod "2cac20b0-8086-44e6-8ef4-cf184b849ee3" (UID: "2cac20b0-8086-44e6-8ef4-cf184b849ee3"). InnerVolumeSpecName "kube-api-access-n8sn2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.217079 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d0ec49-1480-4452-a955-1ee612064f8a-kube-api-access-c5l68" (OuterVolumeSpecName: "kube-api-access-c5l68") pod "f8d0ec49-1480-4452-a955-1ee612064f8a" (UID: "f8d0ec49-1480-4452-a955-1ee612064f8a"). InnerVolumeSpecName "kube-api-access-c5l68". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.225249 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb-kube-api-access-glrjk" (OuterVolumeSpecName: "kube-api-access-glrjk") pod "f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb" (UID: "f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb"). InnerVolumeSpecName "kube-api-access-glrjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.310067 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5l68\" (UniqueName: \"kubernetes.io/projected/f8d0ec49-1480-4452-a955-1ee612064f8a-kube-api-access-c5l68\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.310123 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pnx4\" (UniqueName: \"kubernetes.io/projected/dd9642f6-4325-49a2-bad8-71b3383cc5ca-kube-api-access-9pnx4\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.310136 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8sn2\" (UniqueName: \"kubernetes.io/projected/2cac20b0-8086-44e6-8ef4-cf184b849ee3-kube-api-access-n8sn2\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.310150 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glrjk\" (UniqueName: \"kubernetes.io/projected/f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb-kube-api-access-glrjk\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.408949 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d597-account-create-update-ks9x5" event={"ID":"f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb","Type":"ContainerDied","Data":"effd75acd15a46a0830ac0e9386f4710d44ae8b418db70fb95a27cc0cf1a624c"}
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.409040 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="effd75acd15a46a0830ac0e9386f4710d44ae8b418db70fb95a27cc0cf1a624c"
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.409036 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d597-account-create-update-ks9x5"
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.412310 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-544ht"
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.412442 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-544ht" event={"ID":"dd9642f6-4325-49a2-bad8-71b3383cc5ca","Type":"ContainerDied","Data":"14c6f962c783405a26a2e2bba49d2d196e112a3c845a9605da3feb0f00d032b2"}
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.412517 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14c6f962c783405a26a2e2bba49d2d196e112a3c845a9605da3feb0f00d032b2"
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.415292 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-da19-account-create-update-vdswv"
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.415285 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-da19-account-create-update-vdswv" event={"ID":"2cac20b0-8086-44e6-8ef4-cf184b849ee3","Type":"ContainerDied","Data":"80425ec5c13b53c8071efeb010685758996bd8dccdd3085c6dd27c29ca50e815"}
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.415454 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80425ec5c13b53c8071efeb010685758996bd8dccdd3085c6dd27c29ca50e815"
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.419884 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d6b8-account-create-update-j4zvh" event={"ID":"f8d0ec49-1480-4452-a955-1ee612064f8a","Type":"ContainerDied","Data":"a675eca89e1bf86ba7076ccda37b8b1d9edcf7640ac0bc7b1446f4a4d2bab584"}
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.419953 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a675eca89e1bf86ba7076ccda37b8b1d9edcf7640ac0bc7b1446f4a4d2bab584"
Jan 29 09:00:55 crc kubenswrapper[4895]: I0129 09:00:55.420035 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d6b8-account-create-update-j4zvh"
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.205382 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2"
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.282469 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g84tq"]
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.283736 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" podUID="a9a8b6ea-9175-4086-98ad-a7b35af3798d" containerName="dnsmasq-dns" containerID="cri-o://dc85c6f26ebe5257c5eed4f46629e4041f4e9d089296e541d79b90dd36eea35a" gracePeriod=10
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.444486 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sd8bs" event={"ID":"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480","Type":"ContainerStarted","Data":"36498f6b0caddda96dc7a8cc3b6b8e4c9ee7fb15d1a37d06719dc74b6f19ffc8"}
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.447274 4895 generic.go:334] "Generic (PLEG): container finished" podID="a7e86824-b384-45ea-b4bb-946f795bc9c5" containerID="83eccf0330358d74f34fad459119acc542635abe11e25da3b847b9dfa4ab0517" exitCode=0
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.447350 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kchhm" event={"ID":"a7e86824-b384-45ea-b4bb-946f795bc9c5","Type":"ContainerDied","Data":"83eccf0330358d74f34fad459119acc542635abe11e25da3b847b9dfa4ab0517"}
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.458302 4895 generic.go:334] "Generic (PLEG): container finished" podID="a9a8b6ea-9175-4086-98ad-a7b35af3798d" containerID="dc85c6f26ebe5257c5eed4f46629e4041f4e9d089296e541d79b90dd36eea35a" exitCode=0
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.458438 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" event={"ID":"a9a8b6ea-9175-4086-98ad-a7b35af3798d","Type":"ContainerDied","Data":"dc85c6f26ebe5257c5eed4f46629e4041f4e9d089296e541d79b90dd36eea35a"}
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.467895 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-sd8bs" podStartSLOduration=2.8641121370000002 podStartE2EDuration="9.467807588s" podCreationTimestamp="2026-01-29 09:00:47 +0000 UTC" firstStartedPulling="2026-01-29 09:00:48.889710685 +0000 UTC m=+1190.531218831" lastFinishedPulling="2026-01-29 09:00:55.493406136 +0000 UTC m=+1197.134914282" observedRunningTime="2026-01-29 09:00:56.466682408 +0000 UTC m=+1198.108190554" watchObservedRunningTime="2026-01-29 09:00:56.467807588 +0000 UTC m=+1198.109315734"
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.791265 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-g84tq"
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.941220 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-dns-svc\") pod \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") "
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.941323 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4s77\" (UniqueName: \"kubernetes.io/projected/a9a8b6ea-9175-4086-98ad-a7b35af3798d-kube-api-access-j4s77\") pod \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") "
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.941379 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-config\") pod \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") "
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.941562 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-ovsdbserver-sb\") pod \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") "
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.941617 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-ovsdbserver-nb\") pod \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\" (UID: \"a9a8b6ea-9175-4086-98ad-a7b35af3798d\") "
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.958973 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9a8b6ea-9175-4086-98ad-a7b35af3798d-kube-api-access-j4s77" (OuterVolumeSpecName: "kube-api-access-j4s77") pod "a9a8b6ea-9175-4086-98ad-a7b35af3798d" (UID: "a9a8b6ea-9175-4086-98ad-a7b35af3798d"). InnerVolumeSpecName "kube-api-access-j4s77". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.989143 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a9a8b6ea-9175-4086-98ad-a7b35af3798d" (UID: "a9a8b6ea-9175-4086-98ad-a7b35af3798d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.992764 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-config" (OuterVolumeSpecName: "config") pod "a9a8b6ea-9175-4086-98ad-a7b35af3798d" (UID: "a9a8b6ea-9175-4086-98ad-a7b35af3798d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:00:56 crc kubenswrapper[4895]: I0129 09:00:56.998185 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a9a8b6ea-9175-4086-98ad-a7b35af3798d" (UID: "a9a8b6ea-9175-4086-98ad-a7b35af3798d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:00:57 crc kubenswrapper[4895]: I0129 09:00:57.000732 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a9a8b6ea-9175-4086-98ad-a7b35af3798d" (UID: "a9a8b6ea-9175-4086-98ad-a7b35af3798d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:00:57 crc kubenswrapper[4895]: I0129 09:00:57.044240 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:57 crc kubenswrapper[4895]: I0129 09:00:57.044606 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4s77\" (UniqueName: \"kubernetes.io/projected/a9a8b6ea-9175-4086-98ad-a7b35af3798d-kube-api-access-j4s77\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:57 crc kubenswrapper[4895]: I0129 09:00:57.044800 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-config\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:57 crc kubenswrapper[4895]: I0129 09:00:57.045076 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:57 crc kubenswrapper[4895]: I0129 09:00:57.045176 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a8b6ea-9175-4086-98ad-a7b35af3798d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 09:00:57 crc kubenswrapper[4895]: I0129 09:00:57.473029 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-g84tq" event={"ID":"a9a8b6ea-9175-4086-98ad-a7b35af3798d","Type":"ContainerDied","Data":"36e92f0bc576980a30bd2bee23cc12c3294012b3df43607591269751838c27fd"}
Jan 29 09:00:57 crc kubenswrapper[4895]: I0129 09:00:57.473159 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-g84tq"
Jan 29 09:00:57 crc kubenswrapper[4895]: I0129 09:00:57.474847 4895 scope.go:117] "RemoveContainer" containerID="dc85c6f26ebe5257c5eed4f46629e4041f4e9d089296e541d79b90dd36eea35a"
Jan 29 09:00:57 crc kubenswrapper[4895]: I0129 09:00:57.512041 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g84tq"]
Jan 29 09:00:57 crc kubenswrapper[4895]: I0129 09:00:57.523776 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g84tq"]
Jan 29 09:00:57 crc kubenswrapper[4895]: I0129 09:00:57.527439 4895 scope.go:117] "RemoveContainer" containerID="57544b7ab707073235f4ee815977b2bce505383b1e3f0888b3ef28b8b7ed9444"
Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.149906 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kchhm"
Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.275621 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-combined-ca-bundle\") pod \"a7e86824-b384-45ea-b4bb-946f795bc9c5\" (UID: \"a7e86824-b384-45ea-b4bb-946f795bc9c5\") "
Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.275774 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-config-data\") pod \"a7e86824-b384-45ea-b4bb-946f795bc9c5\" (UID: \"a7e86824-b384-45ea-b4bb-946f795bc9c5\") "
Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.276039 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvprn\" (UniqueName: \"kubernetes.io/projected/a7e86824-b384-45ea-b4bb-946f795bc9c5-kube-api-access-qvprn\") pod \"a7e86824-b384-45ea-b4bb-946f795bc9c5\" (UID: \"a7e86824-b384-45ea-b4bb-946f795bc9c5\") "
Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.276063 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-db-sync-config-data\") pod \"a7e86824-b384-45ea-b4bb-946f795bc9c5\" (UID: \"a7e86824-b384-45ea-b4bb-946f795bc9c5\") "
Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.285414 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a7e86824-b384-45ea-b4bb-946f795bc9c5" (UID: "a7e86824-b384-45ea-b4bb-946f795bc9c5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.286225 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7e86824-b384-45ea-b4bb-946f795bc9c5-kube-api-access-qvprn" (OuterVolumeSpecName: "kube-api-access-qvprn") pod "a7e86824-b384-45ea-b4bb-946f795bc9c5" (UID: "a7e86824-b384-45ea-b4bb-946f795bc9c5"). InnerVolumeSpecName "kube-api-access-qvprn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.302329 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7e86824-b384-45ea-b4bb-946f795bc9c5" (UID: "a7e86824-b384-45ea-b4bb-946f795bc9c5"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.320402 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-config-data" (OuterVolumeSpecName: "config-data") pod "a7e86824-b384-45ea-b4bb-946f795bc9c5" (UID: "a7e86824-b384-45ea-b4bb-946f795bc9c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.378227 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvprn\" (UniqueName: \"kubernetes.io/projected/a7e86824-b384-45ea-b4bb-946f795bc9c5-kube-api-access-qvprn\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.378263 4895 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.378274 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.378284 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7e86824-b384-45ea-b4bb-946f795bc9c5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.484023 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kchhm" event={"ID":"a7e86824-b384-45ea-b4bb-946f795bc9c5","Type":"ContainerDied","Data":"a8db93a23832116e787dc2f8f319816de9698fb6772d312fe9627c16a4e79396"} Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.484049 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kchhm" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.484068 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8db93a23832116e787dc2f8f319816de9698fb6772d312fe9627c16a4e79396" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.951102 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rgjc8"] Jan 29 09:00:58 crc kubenswrapper[4895]: E0129 09:00:58.953469 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e86824-b384-45ea-b4bb-946f795bc9c5" containerName="glance-db-sync" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.955040 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e86824-b384-45ea-b4bb-946f795bc9c5" containerName="glance-db-sync" Jan 29 09:00:58 crc kubenswrapper[4895]: E0129 09:00:58.955133 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a8b6ea-9175-4086-98ad-a7b35af3798d" containerName="dnsmasq-dns" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.955210 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a8b6ea-9175-4086-98ad-a7b35af3798d" containerName="dnsmasq-dns" Jan 29 09:00:58 crc kubenswrapper[4895]: E0129 09:00:58.955282 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d0ec49-1480-4452-a955-1ee612064f8a" containerName="mariadb-account-create-update" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.955343 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d0ec49-1480-4452-a955-1ee612064f8a" containerName="mariadb-account-create-update" Jan 29 09:00:58 crc kubenswrapper[4895]: E0129 09:00:58.955430 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd9642f6-4325-49a2-bad8-71b3383cc5ca" containerName="mariadb-database-create" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.955496 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dd9642f6-4325-49a2-bad8-71b3383cc5ca" containerName="mariadb-database-create" Jan 29 09:00:58 crc kubenswrapper[4895]: E0129 09:00:58.955578 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be94e72-1174-4bf4-8706-387fef234ccb" containerName="mariadb-database-create" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.955651 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be94e72-1174-4bf4-8706-387fef234ccb" containerName="mariadb-database-create" Jan 29 09:00:58 crc kubenswrapper[4895]: E0129 09:00:58.955743 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ef183c-59b1-4d07-831f-db71d6f978b8" containerName="mariadb-database-create" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.955805 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="18ef183c-59b1-4d07-831f-db71d6f978b8" containerName="mariadb-database-create" Jan 29 09:00:58 crc kubenswrapper[4895]: E0129 09:00:58.955869 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a8b6ea-9175-4086-98ad-a7b35af3798d" containerName="init" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.955944 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a8b6ea-9175-4086-98ad-a7b35af3798d" containerName="init" Jan 29 09:00:58 crc kubenswrapper[4895]: E0129 09:00:58.956017 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cac20b0-8086-44e6-8ef4-cf184b849ee3" containerName="mariadb-account-create-update" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.956089 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cac20b0-8086-44e6-8ef4-cf184b849ee3" containerName="mariadb-account-create-update" Jan 29 09:00:58 crc kubenswrapper[4895]: E0129 09:00:58.956166 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb" containerName="mariadb-account-create-update" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.956229 4895 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb" containerName="mariadb-account-create-update" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.956549 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a8b6ea-9175-4086-98ad-a7b35af3798d" containerName="dnsmasq-dns" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.956630 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="18ef183c-59b1-4d07-831f-db71d6f978b8" containerName="mariadb-database-create" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.956699 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e86824-b384-45ea-b4bb-946f795bc9c5" containerName="glance-db-sync" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.956776 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cac20b0-8086-44e6-8ef4-cf184b849ee3" containerName="mariadb-account-create-update" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.956840 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d0ec49-1480-4452-a955-1ee612064f8a" containerName="mariadb-account-create-update" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.956911 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb" containerName="mariadb-account-create-update" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.957014 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be94e72-1174-4bf4-8706-387fef234ccb" containerName="mariadb-database-create" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.957091 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd9642f6-4325-49a2-bad8-71b3383cc5ca" containerName="mariadb-database-create" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.958480 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:58 crc kubenswrapper[4895]: I0129 09:00:58.978346 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rgjc8"] Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.099221 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.099350 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.099412 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.099462 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlnqf\" (UniqueName: \"kubernetes.io/projected/e055a682-7ed9-4998-9611-37fface324e3-kube-api-access-zlnqf\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.099487 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-config\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.099534 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.201027 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.201174 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.201243 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.201283 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.201326 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlnqf\" (UniqueName: \"kubernetes.io/projected/e055a682-7ed9-4998-9611-37fface324e3-kube-api-access-zlnqf\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.201352 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-config\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.202179 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.202387 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.202639 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.202647 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-config\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.203255 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.227844 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlnqf\" (UniqueName: \"kubernetes.io/projected/e055a682-7ed9-4998-9611-37fface324e3-kube-api-access-zlnqf\") pod \"dnsmasq-dns-5f59b8f679-rgjc8\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.236843 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a8b6ea-9175-4086-98ad-a7b35af3798d" path="/var/lib/kubelet/pods/a9a8b6ea-9175-4086-98ad-a7b35af3798d/volumes" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.305272 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:00:59 crc kubenswrapper[4895]: I0129 09:00:59.815639 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rgjc8"] Jan 29 09:01:00 crc kubenswrapper[4895]: I0129 09:01:00.509140 4895 generic.go:334] "Generic (PLEG): container finished" podID="e055a682-7ed9-4998-9611-37fface324e3" containerID="91c347e086879584d54c6b720e8418f0ec212b7294b7556b33dcb796074af8b4" exitCode=0 Jan 29 09:01:00 crc kubenswrapper[4895]: I0129 09:01:00.509233 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" event={"ID":"e055a682-7ed9-4998-9611-37fface324e3","Type":"ContainerDied","Data":"91c347e086879584d54c6b720e8418f0ec212b7294b7556b33dcb796074af8b4"} Jan 29 09:01:00 crc kubenswrapper[4895]: I0129 09:01:00.509800 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" event={"ID":"e055a682-7ed9-4998-9611-37fface324e3","Type":"ContainerStarted","Data":"bc3ed34d354189d052abde591b281176710917418fa2cee8e605a4acc473059d"} Jan 29 09:01:01 crc kubenswrapper[4895]: I0129 09:01:01.520724 4895 generic.go:334] "Generic (PLEG): container finished" podID="bc7fffcb-533d-439c-ae0d-7b0bbe9d5480" containerID="36498f6b0caddda96dc7a8cc3b6b8e4c9ee7fb15d1a37d06719dc74b6f19ffc8" exitCode=0 Jan 29 09:01:01 crc kubenswrapper[4895]: I0129 09:01:01.520815 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sd8bs" event={"ID":"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480","Type":"ContainerDied","Data":"36498f6b0caddda96dc7a8cc3b6b8e4c9ee7fb15d1a37d06719dc74b6f19ffc8"} Jan 29 09:01:01 crc kubenswrapper[4895]: I0129 09:01:01.524304 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" event={"ID":"e055a682-7ed9-4998-9611-37fface324e3","Type":"ContainerStarted","Data":"30a09bbadd6f2d699220aff527cfb5f0889088b8e11c0d20dc73b28976a3fc6f"} Jan 
29 09:01:01 crc kubenswrapper[4895]: I0129 09:01:01.524450 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:01:01 crc kubenswrapper[4895]: I0129 09:01:01.574775 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" podStartSLOduration=3.574747907 podStartE2EDuration="3.574747907s" podCreationTimestamp="2026-01-29 09:00:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:01.566170938 +0000 UTC m=+1203.207679084" watchObservedRunningTime="2026-01-29 09:01:01.574747907 +0000 UTC m=+1203.216256053" Jan 29 09:01:02 crc kubenswrapper[4895]: I0129 09:01:02.912698 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sd8bs" Jan 29 09:01:02 crc kubenswrapper[4895]: I0129 09:01:02.982662 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-config-data\") pod \"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480\" (UID: \"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480\") " Jan 29 09:01:02 crc kubenswrapper[4895]: I0129 09:01:02.982819 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-combined-ca-bundle\") pod \"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480\" (UID: \"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480\") " Jan 29 09:01:02 crc kubenswrapper[4895]: I0129 09:01:02.982935 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc64n\" (UniqueName: \"kubernetes.io/projected/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-kube-api-access-wc64n\") pod \"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480\" (UID: \"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480\") " 
Jan 29 09:01:02 crc kubenswrapper[4895]: I0129 09:01:02.998572 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-kube-api-access-wc64n" (OuterVolumeSpecName: "kube-api-access-wc64n") pod "bc7fffcb-533d-439c-ae0d-7b0bbe9d5480" (UID: "bc7fffcb-533d-439c-ae0d-7b0bbe9d5480"). InnerVolumeSpecName "kube-api-access-wc64n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.013619 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc7fffcb-533d-439c-ae0d-7b0bbe9d5480" (UID: "bc7fffcb-533d-439c-ae0d-7b0bbe9d5480"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.031307 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-config-data" (OuterVolumeSpecName: "config-data") pod "bc7fffcb-533d-439c-ae0d-7b0bbe9d5480" (UID: "bc7fffcb-533d-439c-ae0d-7b0bbe9d5480"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.111684 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.112029 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc64n\" (UniqueName: \"kubernetes.io/projected/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-kube-api-access-wc64n\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.112100 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.543209 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sd8bs" event={"ID":"bc7fffcb-533d-439c-ae0d-7b0bbe9d5480","Type":"ContainerDied","Data":"9f5ca5776822324aebb9c5318c9e409c1687dc7f2b32bb87f2207dbeba7d605b"} Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.543278 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f5ca5776822324aebb9c5318c9e409c1687dc7f2b32bb87f2207dbeba7d605b" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.543289 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-sd8bs" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.835896 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rgjc8"] Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.836669 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" podUID="e055a682-7ed9-4998-9611-37fface324e3" containerName="dnsmasq-dns" containerID="cri-o://30a09bbadd6f2d699220aff527cfb5f0889088b8e11c0d20dc73b28976a3fc6f" gracePeriod=10 Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.846526 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-s8csf"] Jan 29 09:01:03 crc kubenswrapper[4895]: E0129 09:01:03.847080 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7fffcb-533d-439c-ae0d-7b0bbe9d5480" containerName="keystone-db-sync" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.847103 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7fffcb-533d-439c-ae0d-7b0bbe9d5480" containerName="keystone-db-sync" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.847302 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc7fffcb-533d-439c-ae0d-7b0bbe9d5480" containerName="keystone-db-sync" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.851172 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.856320 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.856347 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.859644 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.863477 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.863712 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2twgr" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.866309 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s8csf"] Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.931356 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-s8j24"] Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.933707 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.937875 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.937945 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-scripts\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.937976 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-config-data\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.938075 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-combined-ca-bundle\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.938107 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: 
\"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.938128 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-credential-keys\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.938148 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-config\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.938167 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.938185 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hnwx\" (UniqueName: \"kubernetes.io/projected/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-kube-api-access-5hnwx\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.938232 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnfhs\" (UniqueName: \"kubernetes.io/projected/3ca4002e-1bb2-40f5-861d-832fb18bb239-kube-api-access-gnfhs\") pod 
\"dnsmasq-dns-bbf5cc879-s8j24\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.938251 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-fernet-keys\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.938306 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:03 crc kubenswrapper[4895]: I0129 09:01:03.952319 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-s8j24"] Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.039185 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-combined-ca-bundle\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.039262 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.039291 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-credential-keys\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.039317 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.039337 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-config\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.039357 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hnwx\" (UniqueName: \"kubernetes.io/projected/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-kube-api-access-5hnwx\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.039403 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnfhs\" (UniqueName: \"kubernetes.io/projected/3ca4002e-1bb2-40f5-861d-832fb18bb239-kube-api-access-gnfhs\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.039429 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-fernet-keys\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.039461 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.039513 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.039546 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-scripts\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.039571 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-config-data\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.046280 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-config\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: 
\"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.046313 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.047321 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.050085 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.055482 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.063906 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-config-data\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:04 crc 
kubenswrapper[4895]: I0129 09:01:04.067981 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-fernet-keys\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.069016 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-scripts\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.069736 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-credential-keys\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.101899 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-combined-ca-bundle\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.223724 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hnwx\" (UniqueName: \"kubernetes.io/projected/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-kube-api-access-5hnwx\") pod \"keystone-bootstrap-s8csf\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.225820 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnfhs\" 
(UniqueName: \"kubernetes.io/projected/3ca4002e-1bb2-40f5-861d-832fb18bb239-kube-api-access-gnfhs\") pod \"dnsmasq-dns-bbf5cc879-s8j24\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.293999 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-xbr7n"] Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.295649 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-xbr7n" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.337813 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-xbr7n"] Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.410357 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-0f17-account-create-update-htqfc"] Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.420344 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-0f17-account-create-update-htqfc" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.427262 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.439073 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.444745 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-0f17-account-create-update-htqfc"] Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.449577 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4fsj\" (UniqueName: \"kubernetes.io/projected/237a1a8f-6944-49c1-bd88-805e164ef454-kube-api-access-h4fsj\") pod \"ironic-db-create-xbr7n\" (UID: \"237a1a8f-6944-49c1-bd88-805e164ef454\") " pod="openstack/ironic-db-create-xbr7n" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.449646 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/237a1a8f-6944-49c1-bd88-805e164ef454-operator-scripts\") pod \"ironic-db-create-xbr7n\" (UID: \"237a1a8f-6944-49c1-bd88-805e164ef454\") " pod="openstack/ironic-db-create-xbr7n" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.476982 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.554280 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42jh7\" (UniqueName: \"kubernetes.io/projected/6622d76d-c899-462e-a3ee-137c25e2a9ad-kube-api-access-42jh7\") pod \"ironic-0f17-account-create-update-htqfc\" (UID: \"6622d76d-c899-462e-a3ee-137c25e2a9ad\") " pod="openstack/ironic-0f17-account-create-update-htqfc" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.554793 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-xqv89"] Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.554869 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6622d76d-c899-462e-a3ee-137c25e2a9ad-operator-scripts\") pod \"ironic-0f17-account-create-update-htqfc\" (UID: \"6622d76d-c899-462e-a3ee-137c25e2a9ad\") " pod="openstack/ironic-0f17-account-create-update-htqfc" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.554909 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4fsj\" (UniqueName: \"kubernetes.io/projected/237a1a8f-6944-49c1-bd88-805e164ef454-kube-api-access-h4fsj\") pod \"ironic-db-create-xbr7n\" (UID: \"237a1a8f-6944-49c1-bd88-805e164ef454\") " pod="openstack/ironic-db-create-xbr7n" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.554986 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/237a1a8f-6944-49c1-bd88-805e164ef454-operator-scripts\") pod \"ironic-db-create-xbr7n\" (UID: \"237a1a8f-6944-49c1-bd88-805e164ef454\") " pod="openstack/ironic-db-create-xbr7n" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.556179 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.562550 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6sgwh" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.566960 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.581148 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/237a1a8f-6944-49c1-bd88-805e164ef454-operator-scripts\") pod \"ironic-db-create-xbr7n\" (UID: \"237a1a8f-6944-49c1-bd88-805e164ef454\") " pod="openstack/ironic-db-create-xbr7n" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.583497 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.589386 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4fsj\" (UniqueName: \"kubernetes.io/projected/237a1a8f-6944-49c1-bd88-805e164ef454-kube-api-access-h4fsj\") pod \"ironic-db-create-xbr7n\" (UID: \"237a1a8f-6944-49c1-bd88-805e164ef454\") " pod="openstack/ironic-db-create-xbr7n" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.602837 4895 generic.go:334] "Generic (PLEG): container finished" podID="e055a682-7ed9-4998-9611-37fface324e3" containerID="30a09bbadd6f2d699220aff527cfb5f0889088b8e11c0d20dc73b28976a3fc6f" exitCode=0 Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.602933 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" event={"ID":"e055a682-7ed9-4998-9611-37fface324e3","Type":"ContainerDied","Data":"30a09bbadd6f2d699220aff527cfb5f0889088b8e11c0d20dc73b28976a3fc6f"} Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.606665 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-db-sync-qd4f9"] Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.608178 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qd4f9" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.613444 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vmp9g" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.613711 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.642162 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-95b5h"] Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.643727 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-95b5h" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.657105 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qd4f9"] Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.658757 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6622d76d-c899-462e-a3ee-137c25e2a9ad-operator-scripts\") pod \"ironic-0f17-account-create-update-htqfc\" (UID: \"6622d76d-c899-462e-a3ee-137c25e2a9ad\") " pod="openstack/ironic-0f17-account-create-update-htqfc" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.658882 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42jh7\" (UniqueName: \"kubernetes.io/projected/6622d76d-c899-462e-a3ee-137c25e2a9ad-kube-api-access-42jh7\") pod \"ironic-0f17-account-create-update-htqfc\" (UID: \"6622d76d-c899-462e-a3ee-137c25e2a9ad\") " pod="openstack/ironic-0f17-account-create-update-htqfc" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.659222 4895 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ironic-db-create-xbr7n" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.659565 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.659818 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-765t2" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.659990 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.660441 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6622d76d-c899-462e-a3ee-137c25e2a9ad-operator-scripts\") pod \"ironic-0f17-account-create-update-htqfc\" (UID: \"6622d76d-c899-462e-a3ee-137c25e2a9ad\") " pod="openstack/ironic-0f17-account-create-update-htqfc" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.694757 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-95b5h"] Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.713715 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xqv89"] Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.723184 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42jh7\" (UniqueName: \"kubernetes.io/projected/6622d76d-c899-462e-a3ee-137c25e2a9ad-kube-api-access-42jh7\") pod \"ironic-0f17-account-create-update-htqfc\" (UID: \"6622d76d-c899-462e-a3ee-137c25e2a9ad\") " pod="openstack/ironic-0f17-account-create-update-htqfc" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.751575 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-0f17-account-create-update-htqfc" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.767520 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-db-sync-config-data\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.767591 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75739842-3c97-4e9a-b13b-4fb5929461b8-config\") pod \"neutron-db-sync-95b5h\" (UID: \"75739842-3c97-4e9a-b13b-4fb5929461b8\") " pod="openstack/neutron-db-sync-95b5h" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.767614 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-scripts\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.767632 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-config-data\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.767656 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5x5k\" (UniqueName: \"kubernetes.io/projected/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-kube-api-access-x5x5k\") pod \"barbican-db-sync-qd4f9\" (UID: \"1077fb9f-c6a3-416d-a7c9-011dd8954ab1\") " 
pod="openstack/barbican-db-sync-qd4f9" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.767703 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-combined-ca-bundle\") pod \"barbican-db-sync-qd4f9\" (UID: \"1077fb9f-c6a3-416d-a7c9-011dd8954ab1\") " pod="openstack/barbican-db-sync-qd4f9" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.767738 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-db-sync-config-data\") pod \"barbican-db-sync-qd4f9\" (UID: \"1077fb9f-c6a3-416d-a7c9-011dd8954ab1\") " pod="openstack/barbican-db-sync-qd4f9" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.767762 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-combined-ca-bundle\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.767782 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5nz8\" (UniqueName: \"kubernetes.io/projected/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-kube-api-access-d5nz8\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.767809 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75739842-3c97-4e9a-b13b-4fb5929461b8-combined-ca-bundle\") pod \"neutron-db-sync-95b5h\" (UID: 
\"75739842-3c97-4e9a-b13b-4fb5929461b8\") " pod="openstack/neutron-db-sync-95b5h" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.767840 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-etc-machine-id\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.767868 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjmnj\" (UniqueName: \"kubernetes.io/projected/75739842-3c97-4e9a-b13b-4fb5929461b8-kube-api-access-kjmnj\") pod \"neutron-db-sync-95b5h\" (UID: \"75739842-3c97-4e9a-b13b-4fb5929461b8\") " pod="openstack/neutron-db-sync-95b5h" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.877602 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-combined-ca-bundle\") pod \"barbican-db-sync-qd4f9\" (UID: \"1077fb9f-c6a3-416d-a7c9-011dd8954ab1\") " pod="openstack/barbican-db-sync-qd4f9" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.877726 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-db-sync-config-data\") pod \"barbican-db-sync-qd4f9\" (UID: \"1077fb9f-c6a3-416d-a7c9-011dd8954ab1\") " pod="openstack/barbican-db-sync-qd4f9" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.877770 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-combined-ca-bundle\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " 
pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.877802 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5nz8\" (UniqueName: \"kubernetes.io/projected/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-kube-api-access-d5nz8\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.877862 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75739842-3c97-4e9a-b13b-4fb5929461b8-combined-ca-bundle\") pod \"neutron-db-sync-95b5h\" (UID: \"75739842-3c97-4e9a-b13b-4fb5929461b8\") " pod="openstack/neutron-db-sync-95b5h" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.877952 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-etc-machine-id\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.878002 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjmnj\" (UniqueName: \"kubernetes.io/projected/75739842-3c97-4e9a-b13b-4fb5929461b8-kube-api-access-kjmnj\") pod \"neutron-db-sync-95b5h\" (UID: \"75739842-3c97-4e9a-b13b-4fb5929461b8\") " pod="openstack/neutron-db-sync-95b5h" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.878097 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-db-sync-config-data\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 
09:01:04.878257 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75739842-3c97-4e9a-b13b-4fb5929461b8-config\") pod \"neutron-db-sync-95b5h\" (UID: \"75739842-3c97-4e9a-b13b-4fb5929461b8\") " pod="openstack/neutron-db-sync-95b5h" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.878298 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-scripts\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.878341 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-config-data\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.878396 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5x5k\" (UniqueName: \"kubernetes.io/projected/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-kube-api-access-x5x5k\") pod \"barbican-db-sync-qd4f9\" (UID: \"1077fb9f-c6a3-416d-a7c9-011dd8954ab1\") " pod="openstack/barbican-db-sync-qd4f9" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.880306 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-etc-machine-id\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.913654 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-config-data\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.937292 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5nz8\" (UniqueName: \"kubernetes.io/projected/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-kube-api-access-d5nz8\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.939320 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-combined-ca-bundle\") pod \"barbican-db-sync-qd4f9\" (UID: \"1077fb9f-c6a3-416d-a7c9-011dd8954ab1\") " pod="openstack/barbican-db-sync-qd4f9" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.946163 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75739842-3c97-4e9a-b13b-4fb5929461b8-combined-ca-bundle\") pod \"neutron-db-sync-95b5h\" (UID: \"75739842-3c97-4e9a-b13b-4fb5929461b8\") " pod="openstack/neutron-db-sync-95b5h" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.946614 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/75739842-3c97-4e9a-b13b-4fb5929461b8-config\") pod \"neutron-db-sync-95b5h\" (UID: \"75739842-3c97-4e9a-b13b-4fb5929461b8\") " pod="openstack/neutron-db-sync-95b5h" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.940724 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-combined-ca-bundle\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " 
pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.947550 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-db-sync-config-data\") pod \"barbican-db-sync-qd4f9\" (UID: \"1077fb9f-c6a3-416d-a7c9-011dd8954ab1\") " pod="openstack/barbican-db-sync-qd4f9" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.962085 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-scripts\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:04 crc kubenswrapper[4895]: I0129 09:01:04.984739 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5x5k\" (UniqueName: \"kubernetes.io/projected/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-kube-api-access-x5x5k\") pod \"barbican-db-sync-qd4f9\" (UID: \"1077fb9f-c6a3-416d-a7c9-011dd8954ab1\") " pod="openstack/barbican-db-sync-qd4f9" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.002786 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-db-sync-config-data\") pod \"cinder-db-sync-xqv89\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.005202 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjmnj\" (UniqueName: \"kubernetes.io/projected/75739842-3c97-4e9a-b13b-4fb5929461b8-kube-api-access-kjmnj\") pod \"neutron-db-sync-95b5h\" (UID: \"75739842-3c97-4e9a-b13b-4fb5929461b8\") " pod="openstack/neutron-db-sync-95b5h" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.026066 4895 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-s8j24"] Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.026162 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qd4f9" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.041282 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-95b5h" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.044746 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-jlpfn"] Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.046392 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.052033 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.052328 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.052492 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6v7bg" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.058418 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jlpfn"] Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.079926 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zm6xl"] Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.081974 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.127216 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zm6xl"] Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.143007 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.180471 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:01:05 crc kubenswrapper[4895]: E0129 09:01:05.180991 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e055a682-7ed9-4998-9611-37fface324e3" containerName="dnsmasq-dns" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.181006 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e055a682-7ed9-4998-9611-37fface324e3" containerName="dnsmasq-dns" Jan 29 09:01:05 crc kubenswrapper[4895]: E0129 09:01:05.181022 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e055a682-7ed9-4998-9611-37fface324e3" containerName="init" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.181028 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e055a682-7ed9-4998-9611-37fface324e3" containerName="init" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.181214 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e055a682-7ed9-4998-9611-37fface324e3" containerName="dnsmasq-dns" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.187283 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.194751 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.195075 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.229784 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlnqf\" (UniqueName: \"kubernetes.io/projected/e055a682-7ed9-4998-9611-37fface324e3-kube-api-access-zlnqf\") pod \"e055a682-7ed9-4998-9611-37fface324e3\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.229861 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-dns-swift-storage-0\") pod \"e055a682-7ed9-4998-9611-37fface324e3\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.229935 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-dns-svc\") pod \"e055a682-7ed9-4998-9611-37fface324e3\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.229995 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-config\") pod \"e055a682-7ed9-4998-9611-37fface324e3\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.230058 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-ovsdbserver-sb\") pod \"e055a682-7ed9-4998-9611-37fface324e3\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.230092 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-ovsdbserver-nb\") pod \"e055a682-7ed9-4998-9611-37fface324e3\" (UID: \"e055a682-7ed9-4998-9611-37fface324e3\") " Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.230412 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-scripts\") pod \"placement-db-sync-jlpfn\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.230506 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.230524 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86788\" (UniqueName: \"kubernetes.io/projected/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-kube-api-access-86788\") pod \"placement-db-sync-jlpfn\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.230556 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-logs\") pod 
\"placement-db-sync-jlpfn\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.230574 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-config\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.230591 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.230619 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-combined-ca-bundle\") pod \"placement-db-sync-jlpfn\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.230635 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.230653 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-config-data\") pod 
\"placement-db-sync-jlpfn\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.230698 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7w5c\" (UniqueName: \"kubernetes.io/projected/188f6bfb-7531-44aa-890b-6658e39aa184-kube-api-access-k7w5c\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.230727 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.245466 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e055a682-7ed9-4998-9611-37fface324e3-kube-api-access-zlnqf" (OuterVolumeSpecName: "kube-api-access-zlnqf") pod "e055a682-7ed9-4998-9611-37fface324e3" (UID: "e055a682-7ed9-4998-9611-37fface324e3"). InnerVolumeSpecName "kube-api-access-zlnqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.269729 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-xqv89" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.311505 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.323999 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-config" (OuterVolumeSpecName: "config") pod "e055a682-7ed9-4998-9611-37fface324e3" (UID: "e055a682-7ed9-4998-9611-37fface324e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332282 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332349 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332382 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-scripts\") pod \"placement-db-sync-jlpfn\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332447 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xsl4\" (UniqueName: 
\"kubernetes.io/projected/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-kube-api-access-4xsl4\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332473 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-run-httpd\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332500 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-scripts\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332528 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332569 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332599 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86788\" (UniqueName: \"kubernetes.io/projected/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-kube-api-access-86788\") pod \"placement-db-sync-jlpfn\" (UID: 
\"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332623 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-log-httpd\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332664 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-logs\") pod \"placement-db-sync-jlpfn\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332693 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-config\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332714 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332761 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-combined-ca-bundle\") pod \"placement-db-sync-jlpfn\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc 
kubenswrapper[4895]: I0129 09:01:05.332788 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332810 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-config-data\") pod \"placement-db-sync-jlpfn\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332837 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-config-data\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.332908 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7w5c\" (UniqueName: \"kubernetes.io/projected/188f6bfb-7531-44aa-890b-6658e39aa184-kube-api-access-k7w5c\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.333117 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlnqf\" (UniqueName: \"kubernetes.io/projected/e055a682-7ed9-4998-9611-37fface324e3-kube-api-access-zlnqf\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.334140 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.334181 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.334584 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-config\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.335851 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-logs\") pod \"placement-db-sync-jlpfn\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.337097 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.337693 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.338095 
4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.340027 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-scripts\") pod \"placement-db-sync-jlpfn\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.340623 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-combined-ca-bundle\") pod \"placement-db-sync-jlpfn\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.348329 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-config-data\") pod \"placement-db-sync-jlpfn\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.347445 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e055a682-7ed9-4998-9611-37fface324e3" (UID: "e055a682-7ed9-4998-9611-37fface324e3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.371522 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7w5c\" (UniqueName: \"kubernetes.io/projected/188f6bfb-7531-44aa-890b-6658e39aa184-kube-api-access-k7w5c\") pod \"dnsmasq-dns-56df8fb6b7-zm6xl\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.392791 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86788\" (UniqueName: \"kubernetes.io/projected/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-kube-api-access-86788\") pod \"placement-db-sync-jlpfn\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.416004 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.419656 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.428254 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-brtlg" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.428464 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.428673 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.428522 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.432831 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.436545 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xsl4\" (UniqueName: \"kubernetes.io/projected/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-kube-api-access-4xsl4\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.436615 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-run-httpd\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.436672 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-scripts\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 
crc kubenswrapper[4895]: I0129 09:01:05.436719 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.436780 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-log-httpd\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.436947 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-config-data\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.437011 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.437111 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.438230 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-log-httpd\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " 
pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.438219 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-run-httpd\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.445017 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.445018 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.449338 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-config-data\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.451889 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-scripts\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.483041 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xsl4\" (UniqueName: 
\"kubernetes.io/projected/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-kube-api-access-4xsl4\") pod \"ceilometer-0\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.541174 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.541252 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.541329 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-logs\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.541355 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.541376 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.541395 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.541415 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.541460 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjkcf\" (UniqueName: \"kubernetes.io/projected/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-kube-api-access-gjkcf\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.555473 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e055a682-7ed9-4998-9611-37fface324e3" (UID: "e055a682-7ed9-4998-9611-37fface324e3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.557626 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e055a682-7ed9-4998-9611-37fface324e3" (UID: "e055a682-7ed9-4998-9611-37fface324e3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.590863 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.592492 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.601616 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.603682 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.607227 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e055a682-7ed9-4998-9611-37fface324e3" (UID: "e055a682-7ed9-4998-9611-37fface324e3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.612378 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.630664 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" event={"ID":"e055a682-7ed9-4998-9611-37fface324e3","Type":"ContainerDied","Data":"bc3ed34d354189d052abde591b281176710917418fa2cee8e605a4acc473059d"} Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.631278 4895 scope.go:117] "RemoveContainer" containerID="30a09bbadd6f2d699220aff527cfb5f0889088b8e11c0d20dc73b28976a3fc6f" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.631511 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rgjc8" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.640819 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.643394 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.645328 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.645399 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.645457 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-logs\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.645484 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.645507 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc 
kubenswrapper[4895]: I0129 09:01:05.645527 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.645549 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.645588 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjkcf\" (UniqueName: \"kubernetes.io/projected/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-kube-api-access-gjkcf\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.645719 4895 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.645733 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.645747 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e055a682-7ed9-4998-9611-37fface324e3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 
09:01:05.658655 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.663989 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.682381 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.683515 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-logs\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.706292 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.709854 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjkcf\" (UniqueName: \"kubernetes.io/projected/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-kube-api-access-gjkcf\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " 
pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.714650 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.714654 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.716303 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.747868 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0736f67-dad4-4550-8654-9de2217a8750-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.748037 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc 
kubenswrapper[4895]: I0129 09:01:05.748074 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0736f67-dad4-4550-8654-9de2217a8750-logs\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.748118 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwmr5\" (UniqueName: \"kubernetes.io/projected/e0736f67-dad4-4550-8654-9de2217a8750-kube-api-access-jwmr5\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.748154 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.748176 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.748251 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-config-data\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " 
pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.748294 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-scripts\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.758089 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.822560 4895 scope.go:117] "RemoveContainer" containerID="91c347e086879584d54c6b720e8418f0ec212b7294b7556b33dcb796074af8b4" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.835822 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rgjc8"] Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.852953 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.853020 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0736f67-dad4-4550-8654-9de2217a8750-logs\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.853087 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jwmr5\" (UniqueName: \"kubernetes.io/projected/e0736f67-dad4-4550-8654-9de2217a8750-kube-api-access-jwmr5\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.853114 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.853135 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.853209 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-config-data\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.853738 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-scripts\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.854107 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.857274 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0736f67-dad4-4550-8654-9de2217a8750-logs\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.864124 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0736f67-dad4-4550-8654-9de2217a8750-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.866383 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0736f67-dad4-4550-8654-9de2217a8750-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.870548 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.875609 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rgjc8"] Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.885643 
4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.887877 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwmr5\" (UniqueName: \"kubernetes.io/projected/e0736f67-dad4-4550-8654-9de2217a8750-kube-api-access-jwmr5\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.887945 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-scripts\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.907980 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-config-data\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:05 crc kubenswrapper[4895]: I0129 09:01:05.919259 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:06 crc kubenswrapper[4895]: I0129 09:01:06.049232 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:01:06 crc kubenswrapper[4895]: I0129 09:01:06.062029 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s8csf"] Jan 29 09:01:06 crc kubenswrapper[4895]: I0129 09:01:06.098763 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-s8j24"] Jan 29 09:01:06 crc kubenswrapper[4895]: I0129 09:01:06.103648 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:01:06 crc kubenswrapper[4895]: I0129 09:01:06.444473 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-xbr7n"] Jan 29 09:01:06 crc kubenswrapper[4895]: I0129 09:01:06.490596 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-0f17-account-create-update-htqfc"] Jan 29 09:01:06 crc kubenswrapper[4895]: I0129 09:01:06.557251 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xqv89"] Jan 29 09:01:06 crc kubenswrapper[4895]: I0129 09:01:06.578097 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qd4f9"] Jan 29 09:01:06 crc kubenswrapper[4895]: I0129 09:01:06.597281 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-95b5h"] Jan 29 09:01:06 crc kubenswrapper[4895]: W0129 09:01:06.632414 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1077fb9f_c6a3_416d_a7c9_011dd8954ab1.slice/crio-b1062d78810e03831675a6c32079a45db150ddd7846a5aa758c8461af873b144 WatchSource:0}: Error finding container b1062d78810e03831675a6c32079a45db150ddd7846a5aa758c8461af873b144: Status 404 returned error can't find the container with id b1062d78810e03831675a6c32079a45db150ddd7846a5aa758c8461af873b144 Jan 29 09:01:06 crc kubenswrapper[4895]: I0129 09:01:06.661764 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" event={"ID":"3ca4002e-1bb2-40f5-861d-832fb18bb239","Type":"ContainerStarted","Data":"c4bfba3620a76000170d9cc5d373606555d8b7b1f84d1107c1a399264c492531"} Jan 29 09:01:06 crc kubenswrapper[4895]: I0129 09:01:06.664047 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-0f17-account-create-update-htqfc" event={"ID":"6622d76d-c899-462e-a3ee-137c25e2a9ad","Type":"ContainerStarted","Data":"7b3276aa1fad2c65f8d207c455b11f8f7879b4bacb1e11541d1b27e9a7566fb5"} Jan 29 09:01:06 crc kubenswrapper[4895]: I0129 09:01:06.669625 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s8csf" event={"ID":"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2","Type":"ContainerStarted","Data":"01112b4ca8253d6cebd5d281ced6a739d13c93332d678dedfe7ff9541f3e090d"} Jan 29 09:01:06 crc kubenswrapper[4895]: I0129 09:01:06.669690 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s8csf" event={"ID":"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2","Type":"ContainerStarted","Data":"8f0683fbbb9de4fb6a5e402cc229212eba89866482bd10ff0bbbdc5bbce0aea4"} Jan 29 09:01:06 crc kubenswrapper[4895]: I0129 09:01:06.697810 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-s8csf" podStartSLOduration=3.697768318 podStartE2EDuration="3.697768318s" podCreationTimestamp="2026-01-29 09:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:06.692631601 +0000 UTC m=+1208.334139757" watchObservedRunningTime="2026-01-29 09:01:06.697768318 +0000 UTC m=+1208.339276464" Jan 29 09:01:06 crc kubenswrapper[4895]: I0129 09:01:06.698571 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-xbr7n" 
event={"ID":"237a1a8f-6944-49c1-bd88-805e164ef454","Type":"ContainerStarted","Data":"e46ab834be9843fd1a7ef21f0daf9895f068ec97ed15bf145805a80cfb3ca3bf"} Jan 29 09:01:06 crc kubenswrapper[4895]: W0129 09:01:06.721231 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3fc9317_29e0_4b1c_b598_5d95fc98a1e7.slice/crio-e6b843be6d953836cf0f643f9591bd713359b27fb37ac11751bfc6711c491a4b WatchSource:0}: Error finding container e6b843be6d953836cf0f643f9591bd713359b27fb37ac11751bfc6711c491a4b: Status 404 returned error can't find the container with id e6b843be6d953836cf0f643f9591bd713359b27fb37ac11751bfc6711c491a4b Jan 29 09:01:06 crc kubenswrapper[4895]: I0129 09:01:06.986961 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.003829 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zm6xl"] Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.023664 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jlpfn"] Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.247168 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e055a682-7ed9-4998-9611-37fface324e3" path="/var/lib/kubelet/pods/e055a682-7ed9-4998-9611-37fface324e3/volumes" Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.331015 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.365867 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.428706 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.457825 4895 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.521806 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.729699 4895 generic.go:334] "Generic (PLEG): container finished" podID="188f6bfb-7531-44aa-890b-6658e39aa184" containerID="17c9aecdf5b0da00b14a1b5b0bcce0e348f674a638b93e2765dd676f506f6f01" exitCode=0 Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.729815 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" event={"ID":"188f6bfb-7531-44aa-890b-6658e39aa184","Type":"ContainerDied","Data":"17c9aecdf5b0da00b14a1b5b0bcce0e348f674a638b93e2765dd676f506f6f01"} Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.729852 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" event={"ID":"188f6bfb-7531-44aa-890b-6658e39aa184","Type":"ContainerStarted","Data":"5fa9d43471dd37c44c39b590ba17138025c273cd3ede6549c5dd9cde8fc8653f"} Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.755663 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jlpfn" event={"ID":"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa","Type":"ContainerStarted","Data":"b050aea8ad48ed95fa103f7761ddf96e964713b5f7fea35ce1f4c658667c2fdb"} Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.757799 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0736f67-dad4-4550-8654-9de2217a8750","Type":"ContainerStarted","Data":"992103af01f6e1420ec71f1ce5996b75f4082dc67378b06f8e62bb395174e453"} Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.759404 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee","Type":"ContainerStarted","Data":"4ec978bb0a1cd35ff59acbefb9e06054f86e15eb21ff8933d8bd3b2d4dc1acd2"} Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.778164 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xqv89" event={"ID":"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7","Type":"ContainerStarted","Data":"e6b843be6d953836cf0f643f9591bd713359b27fb37ac11751bfc6711c491a4b"} Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.783010 4895 generic.go:334] "Generic (PLEG): container finished" podID="3ca4002e-1bb2-40f5-861d-832fb18bb239" containerID="ed4e40c3895f9d5127dd7ecaf714e4efac0fd0f36ebdaecb1ffa9ddb1c5f518e" exitCode=0 Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.783105 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" event={"ID":"3ca4002e-1bb2-40f5-861d-832fb18bb239","Type":"ContainerDied","Data":"ed4e40c3895f9d5127dd7ecaf714e4efac0fd0f36ebdaecb1ffa9ddb1c5f518e"} Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.803154 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6","Type":"ContainerStarted","Data":"b190271b89f4cf4d523ef63f034d534e27785a469acd08b993c08d5f06c87505"} Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.819561 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-95b5h" event={"ID":"75739842-3c97-4e9a-b13b-4fb5929461b8","Type":"ContainerStarted","Data":"4037c578807da950a84229413bdc56f16362aef531973ed7facc667f91bf8152"} Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.819650 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-95b5h" event={"ID":"75739842-3c97-4e9a-b13b-4fb5929461b8","Type":"ContainerStarted","Data":"5b720908c08c6d3ed0e1bc1215b4a5f9780d9802dbaeb1d9a8beb3de1722b740"} Jan 29 09:01:07 crc kubenswrapper[4895]: 
I0129 09:01:07.826049 4895 generic.go:334] "Generic (PLEG): container finished" podID="6622d76d-c899-462e-a3ee-137c25e2a9ad" containerID="be5eeb198159a2c69759021e019939a0140938c65c6186369aa27ac55d914e6a" exitCode=0 Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.826181 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-0f17-account-create-update-htqfc" event={"ID":"6622d76d-c899-462e-a3ee-137c25e2a9ad","Type":"ContainerDied","Data":"be5eeb198159a2c69759021e019939a0140938c65c6186369aa27ac55d914e6a"} Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.834181 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qd4f9" event={"ID":"1077fb9f-c6a3-416d-a7c9-011dd8954ab1","Type":"ContainerStarted","Data":"b1062d78810e03831675a6c32079a45db150ddd7846a5aa758c8461af873b144"} Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.855775 4895 generic.go:334] "Generic (PLEG): container finished" podID="237a1a8f-6944-49c1-bd88-805e164ef454" containerID="54af0152ce7cfe098d9f2b30dd007a8f430e16751b834941ccc4c44c323a5eaa" exitCode=0 Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.856525 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-xbr7n" event={"ID":"237a1a8f-6944-49c1-bd88-805e164ef454","Type":"ContainerDied","Data":"54af0152ce7cfe098d9f2b30dd007a8f430e16751b834941ccc4c44c323a5eaa"} Jan 29 09:01:07 crc kubenswrapper[4895]: I0129 09:01:07.859797 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-95b5h" podStartSLOduration=3.8597631740000002 podStartE2EDuration="3.859763174s" podCreationTimestamp="2026-01-29 09:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:07.851099282 +0000 UTC m=+1209.492607438" watchObservedRunningTime="2026-01-29 09:01:07.859763174 +0000 UTC m=+1209.501271320" Jan 29 09:01:08 crc 
kubenswrapper[4895]: I0129 09:01:08.398339 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.552656 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-ovsdbserver-nb\") pod \"3ca4002e-1bb2-40f5-861d-832fb18bb239\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.552766 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-config\") pod \"3ca4002e-1bb2-40f5-861d-832fb18bb239\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.552801 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnfhs\" (UniqueName: \"kubernetes.io/projected/3ca4002e-1bb2-40f5-861d-832fb18bb239-kube-api-access-gnfhs\") pod \"3ca4002e-1bb2-40f5-861d-832fb18bb239\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.552947 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-dns-svc\") pod \"3ca4002e-1bb2-40f5-861d-832fb18bb239\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.553031 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-dns-swift-storage-0\") pod \"3ca4002e-1bb2-40f5-861d-832fb18bb239\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.553138 
4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-ovsdbserver-sb\") pod \"3ca4002e-1bb2-40f5-861d-832fb18bb239\" (UID: \"3ca4002e-1bb2-40f5-861d-832fb18bb239\") " Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.571200 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ca4002e-1bb2-40f5-861d-832fb18bb239-kube-api-access-gnfhs" (OuterVolumeSpecName: "kube-api-access-gnfhs") pod "3ca4002e-1bb2-40f5-861d-832fb18bb239" (UID: "3ca4002e-1bb2-40f5-861d-832fb18bb239"). InnerVolumeSpecName "kube-api-access-gnfhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.635659 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ca4002e-1bb2-40f5-861d-832fb18bb239" (UID: "3ca4002e-1bb2-40f5-861d-832fb18bb239"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.636407 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ca4002e-1bb2-40f5-861d-832fb18bb239" (UID: "3ca4002e-1bb2-40f5-861d-832fb18bb239"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.636712 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3ca4002e-1bb2-40f5-861d-832fb18bb239" (UID: "3ca4002e-1bb2-40f5-861d-832fb18bb239"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.637196 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ca4002e-1bb2-40f5-861d-832fb18bb239" (UID: "3ca4002e-1bb2-40f5-861d-832fb18bb239"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.637966 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-config" (OuterVolumeSpecName: "config") pod "3ca4002e-1bb2-40f5-861d-832fb18bb239" (UID: "3ca4002e-1bb2-40f5-861d-832fb18bb239"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.666058 4895 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.666102 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.666115 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.666149 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-config\") on node 
\"crc\" DevicePath \"\"" Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.666160 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnfhs\" (UniqueName: \"kubernetes.io/projected/3ca4002e-1bb2-40f5-861d-832fb18bb239-kube-api-access-gnfhs\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.666171 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ca4002e-1bb2-40f5-861d-832fb18bb239-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.886795 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" event={"ID":"188f6bfb-7531-44aa-890b-6658e39aa184","Type":"ContainerStarted","Data":"f392274ee669920d8a90e3d21da83c3dcd76a174babb73fe75efa8daed63bdee"} Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.887413 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.892736 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.893278 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-s8j24" event={"ID":"3ca4002e-1bb2-40f5-861d-832fb18bb239","Type":"ContainerDied","Data":"c4bfba3620a76000170d9cc5d373606555d8b7b1f84d1107c1a399264c492531"} Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.893380 4895 scope.go:117] "RemoveContainer" containerID="ed4e40c3895f9d5127dd7ecaf714e4efac0fd0f36ebdaecb1ffa9ddb1c5f518e" Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.907639 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6","Type":"ContainerStarted","Data":"17b532f4b7d80ebe5c0108a8feda5d12f86e49a2342dbc55346dd54442da9232"} Jan 29 09:01:08 crc kubenswrapper[4895]: I0129 09:01:08.921012 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" podStartSLOduration=4.920984816 podStartE2EDuration="4.920984816s" podCreationTimestamp="2026-01-29 09:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:08.915489839 +0000 UTC m=+1210.556997985" watchObservedRunningTime="2026-01-29 09:01:08.920984816 +0000 UTC m=+1210.562492962" Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.005793 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-s8j24"] Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.016334 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-s8j24"] Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.312730 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ca4002e-1bb2-40f5-861d-832fb18bb239" 
path="/var/lib/kubelet/pods/3ca4002e-1bb2-40f5-861d-832fb18bb239/volumes" Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.438558 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-xbr7n" Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.495456 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4fsj\" (UniqueName: \"kubernetes.io/projected/237a1a8f-6944-49c1-bd88-805e164ef454-kube-api-access-h4fsj\") pod \"237a1a8f-6944-49c1-bd88-805e164ef454\" (UID: \"237a1a8f-6944-49c1-bd88-805e164ef454\") " Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.495687 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/237a1a8f-6944-49c1-bd88-805e164ef454-operator-scripts\") pod \"237a1a8f-6944-49c1-bd88-805e164ef454\" (UID: \"237a1a8f-6944-49c1-bd88-805e164ef454\") " Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.498004 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/237a1a8f-6944-49c1-bd88-805e164ef454-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "237a1a8f-6944-49c1-bd88-805e164ef454" (UID: "237a1a8f-6944-49c1-bd88-805e164ef454"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.504673 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237a1a8f-6944-49c1-bd88-805e164ef454-kube-api-access-h4fsj" (OuterVolumeSpecName: "kube-api-access-h4fsj") pod "237a1a8f-6944-49c1-bd88-805e164ef454" (UID: "237a1a8f-6944-49c1-bd88-805e164ef454"). InnerVolumeSpecName "kube-api-access-h4fsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.519773 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-0f17-account-create-update-htqfc" Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.598979 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42jh7\" (UniqueName: \"kubernetes.io/projected/6622d76d-c899-462e-a3ee-137c25e2a9ad-kube-api-access-42jh7\") pod \"6622d76d-c899-462e-a3ee-137c25e2a9ad\" (UID: \"6622d76d-c899-462e-a3ee-137c25e2a9ad\") " Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.599217 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6622d76d-c899-462e-a3ee-137c25e2a9ad-operator-scripts\") pod \"6622d76d-c899-462e-a3ee-137c25e2a9ad\" (UID: \"6622d76d-c899-462e-a3ee-137c25e2a9ad\") " Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.599608 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6622d76d-c899-462e-a3ee-137c25e2a9ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6622d76d-c899-462e-a3ee-137c25e2a9ad" (UID: "6622d76d-c899-462e-a3ee-137c25e2a9ad"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.600187 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6622d76d-c899-462e-a3ee-137c25e2a9ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.600221 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/237a1a8f-6944-49c1-bd88-805e164ef454-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.600237 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4fsj\" (UniqueName: \"kubernetes.io/projected/237a1a8f-6944-49c1-bd88-805e164ef454-kube-api-access-h4fsj\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.604161 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6622d76d-c899-462e-a3ee-137c25e2a9ad-kube-api-access-42jh7" (OuterVolumeSpecName: "kube-api-access-42jh7") pod "6622d76d-c899-462e-a3ee-137c25e2a9ad" (UID: "6622d76d-c899-462e-a3ee-137c25e2a9ad"). InnerVolumeSpecName "kube-api-access-42jh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.703061 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42jh7\" (UniqueName: \"kubernetes.io/projected/6622d76d-c899-462e-a3ee-137c25e2a9ad-kube-api-access-42jh7\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.941378 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-0f17-account-create-update-htqfc" event={"ID":"6622d76d-c899-462e-a3ee-137c25e2a9ad","Type":"ContainerDied","Data":"7b3276aa1fad2c65f8d207c455b11f8f7879b4bacb1e11541d1b27e9a7566fb5"} Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.941878 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b3276aa1fad2c65f8d207c455b11f8f7879b4bacb1e11541d1b27e9a7566fb5" Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.941568 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-0f17-account-create-update-htqfc" Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.944091 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-xbr7n" event={"ID":"237a1a8f-6944-49c1-bd88-805e164ef454","Type":"ContainerDied","Data":"e46ab834be9843fd1a7ef21f0daf9895f068ec97ed15bf145805a80cfb3ca3bf"} Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.944135 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e46ab834be9843fd1a7ef21f0daf9895f068ec97ed15bf145805a80cfb3ca3bf" Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.944183 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-xbr7n" Jan 29 09:01:09 crc kubenswrapper[4895]: I0129 09:01:09.963735 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0736f67-dad4-4550-8654-9de2217a8750","Type":"ContainerStarted","Data":"f6758220393245f1db12f1a820a304c157659efbbd0c4935d734f66ab6105fb7"} Jan 29 09:01:10 crc kubenswrapper[4895]: I0129 09:01:10.987346 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6","Type":"ContainerStarted","Data":"a72fdf022f9d463eeae94094d1386b085aec8dcb44a96793e7e470da873ce42c"} Jan 29 09:01:12 crc kubenswrapper[4895]: I0129 09:01:12.001394 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0736f67-dad4-4550-8654-9de2217a8750","Type":"ContainerStarted","Data":"d2316c9cff98a6ecf877120c210ff92ced277e32951b1989888b22ea09c53491"} Jan 29 09:01:12 crc kubenswrapper[4895]: I0129 09:01:12.001434 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e0736f67-dad4-4550-8654-9de2217a8750" containerName="glance-log" containerID="cri-o://f6758220393245f1db12f1a820a304c157659efbbd0c4935d734f66ab6105fb7" gracePeriod=30 Jan 29 09:01:12 crc kubenswrapper[4895]: I0129 09:01:12.002152 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" containerName="glance-log" containerID="cri-o://17b532f4b7d80ebe5c0108a8feda5d12f86e49a2342dbc55346dd54442da9232" gracePeriod=30 Jan 29 09:01:12 crc kubenswrapper[4895]: I0129 09:01:12.002187 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e0736f67-dad4-4550-8654-9de2217a8750" containerName="glance-httpd" 
containerID="cri-o://d2316c9cff98a6ecf877120c210ff92ced277e32951b1989888b22ea09c53491" gracePeriod=30 Jan 29 09:01:12 crc kubenswrapper[4895]: I0129 09:01:12.002245 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" containerName="glance-httpd" containerID="cri-o://a72fdf022f9d463eeae94094d1386b085aec8dcb44a96793e7e470da873ce42c" gracePeriod=30 Jan 29 09:01:12 crc kubenswrapper[4895]: I0129 09:01:12.035724 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.035692031 podStartE2EDuration="8.035692031s" podCreationTimestamp="2026-01-29 09:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:12.022286332 +0000 UTC m=+1213.663794478" watchObservedRunningTime="2026-01-29 09:01:12.035692031 +0000 UTC m=+1213.677200177" Jan 29 09:01:12 crc kubenswrapper[4895]: I0129 09:01:12.061201 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.061156932 podStartE2EDuration="8.061156932s" podCreationTimestamp="2026-01-29 09:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:12.052181191 +0000 UTC m=+1213.693689337" watchObservedRunningTime="2026-01-29 09:01:12.061156932 +0000 UTC m=+1213.702665078" Jan 29 09:01:13 crc kubenswrapper[4895]: I0129 09:01:13.016224 4895 generic.go:334] "Generic (PLEG): container finished" podID="8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" containerID="a72fdf022f9d463eeae94094d1386b085aec8dcb44a96793e7e470da873ce42c" exitCode=0 Jan 29 09:01:13 crc kubenswrapper[4895]: I0129 09:01:13.016621 4895 generic.go:334] "Generic (PLEG): container finished" 
podID="8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" containerID="17b532f4b7d80ebe5c0108a8feda5d12f86e49a2342dbc55346dd54442da9232" exitCode=143 Jan 29 09:01:13 crc kubenswrapper[4895]: I0129 09:01:13.016329 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6","Type":"ContainerDied","Data":"a72fdf022f9d463eeae94094d1386b085aec8dcb44a96793e7e470da873ce42c"} Jan 29 09:01:13 crc kubenswrapper[4895]: I0129 09:01:13.016743 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6","Type":"ContainerDied","Data":"17b532f4b7d80ebe5c0108a8feda5d12f86e49a2342dbc55346dd54442da9232"} Jan 29 09:01:13 crc kubenswrapper[4895]: I0129 09:01:13.020897 4895 generic.go:334] "Generic (PLEG): container finished" podID="3f1f1920-f2b7-48e3-a3c2-bba4280bfad2" containerID="01112b4ca8253d6cebd5d281ced6a739d13c93332d678dedfe7ff9541f3e090d" exitCode=0 Jan 29 09:01:13 crc kubenswrapper[4895]: I0129 09:01:13.020974 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s8csf" event={"ID":"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2","Type":"ContainerDied","Data":"01112b4ca8253d6cebd5d281ced6a739d13c93332d678dedfe7ff9541f3e090d"} Jan 29 09:01:13 crc kubenswrapper[4895]: I0129 09:01:13.034341 4895 generic.go:334] "Generic (PLEG): container finished" podID="e0736f67-dad4-4550-8654-9de2217a8750" containerID="d2316c9cff98a6ecf877120c210ff92ced277e32951b1989888b22ea09c53491" exitCode=0 Jan 29 09:01:13 crc kubenswrapper[4895]: I0129 09:01:13.034381 4895 generic.go:334] "Generic (PLEG): container finished" podID="e0736f67-dad4-4550-8654-9de2217a8750" containerID="f6758220393245f1db12f1a820a304c157659efbbd0c4935d734f66ab6105fb7" exitCode=143 Jan 29 09:01:13 crc kubenswrapper[4895]: I0129 09:01:13.034409 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"e0736f67-dad4-4550-8654-9de2217a8750","Type":"ContainerDied","Data":"d2316c9cff98a6ecf877120c210ff92ced277e32951b1989888b22ea09c53491"} Jan 29 09:01:13 crc kubenswrapper[4895]: I0129 09:01:13.034443 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0736f67-dad4-4550-8654-9de2217a8750","Type":"ContainerDied","Data":"f6758220393245f1db12f1a820a304c157659efbbd0c4935d734f66ab6105fb7"} Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.579283 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-4mm74"] Jan 29 09:01:14 crc kubenswrapper[4895]: E0129 09:01:14.580318 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ca4002e-1bb2-40f5-861d-832fb18bb239" containerName="init" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.580336 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ca4002e-1bb2-40f5-861d-832fb18bb239" containerName="init" Jan 29 09:01:14 crc kubenswrapper[4895]: E0129 09:01:14.580353 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6622d76d-c899-462e-a3ee-137c25e2a9ad" containerName="mariadb-account-create-update" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.580360 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6622d76d-c899-462e-a3ee-137c25e2a9ad" containerName="mariadb-account-create-update" Jan 29 09:01:14 crc kubenswrapper[4895]: E0129 09:01:14.580376 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237a1a8f-6944-49c1-bd88-805e164ef454" containerName="mariadb-database-create" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.580383 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="237a1a8f-6944-49c1-bd88-805e164ef454" containerName="mariadb-database-create" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.580558 4895 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3ca4002e-1bb2-40f5-861d-832fb18bb239" containerName="init" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.580572 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6622d76d-c899-462e-a3ee-137c25e2a9ad" containerName="mariadb-account-create-update" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.580581 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="237a1a8f-6944-49c1-bd88-805e164ef454" containerName="mariadb-database-create" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.581661 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.584581 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-dockercfg-zk8bm" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.585479 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.585751 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.596225 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-4mm74"] Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.632701 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/03042a97-0311-4d0c-9878-380987ec9407-config-data-merged\") pod \"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.632771 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/03042a97-0311-4d0c-9878-380987ec9407-etc-podinfo\") pod 
\"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.633033 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-config-data\") pod \"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.633161 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-combined-ca-bundle\") pod \"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.633193 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-scripts\") pod \"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.633300 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pqdt\" (UniqueName: \"kubernetes.io/projected/03042a97-0311-4d0c-9878-380987ec9407-kube-api-access-5pqdt\") pod \"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.735145 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/03042a97-0311-4d0c-9878-380987ec9407-config-data-merged\") pod \"ironic-db-sync-4mm74\" (UID: 
\"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.735226 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/03042a97-0311-4d0c-9878-380987ec9407-etc-podinfo\") pod \"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.735300 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-config-data\") pod \"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.735323 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-scripts\") pod \"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.735339 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-combined-ca-bundle\") pod \"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.735375 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pqdt\" (UniqueName: \"kubernetes.io/projected/03042a97-0311-4d0c-9878-380987ec9407-kube-api-access-5pqdt\") pod \"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.737458 
4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/03042a97-0311-4d0c-9878-380987ec9407-config-data-merged\") pod \"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.745011 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-combined-ca-bundle\") pod \"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.746120 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-config-data\") pod \"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.746126 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/03042a97-0311-4d0c-9878-380987ec9407-etc-podinfo\") pod \"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.755478 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pqdt\" (UniqueName: \"kubernetes.io/projected/03042a97-0311-4d0c-9878-380987ec9407-kube-api-access-5pqdt\") pod \"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.758202 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-scripts\") pod \"ironic-db-sync-4mm74\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:14 crc kubenswrapper[4895]: I0129 09:01:14.903715 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-4mm74" Jan 29 09:01:15 crc kubenswrapper[4895]: I0129 09:01:15.657140 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:01:15 crc kubenswrapper[4895]: I0129 09:01:15.734557 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-v5bw2"] Jan 29 09:01:15 crc kubenswrapper[4895]: I0129 09:01:15.734878 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2" podUID="c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" containerName="dnsmasq-dns" containerID="cri-o://4e3780dfeca1f60ed185c9b8eaf152e2345370a2ad6837321c72cc73db13bed2" gracePeriod=10 Jan 29 09:01:16 crc kubenswrapper[4895]: I0129 09:01:16.085768 4895 generic.go:334] "Generic (PLEG): container finished" podID="c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" containerID="4e3780dfeca1f60ed185c9b8eaf152e2345370a2ad6837321c72cc73db13bed2" exitCode=0 Jan 29 09:01:16 crc kubenswrapper[4895]: I0129 09:01:16.085834 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2" event={"ID":"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9","Type":"ContainerDied","Data":"4e3780dfeca1f60ed185c9b8eaf152e2345370a2ad6837321c72cc73db13bed2"} Jan 29 09:01:16 crc kubenswrapper[4895]: I0129 09:01:16.205394 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2" podUID="c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Jan 29 09:01:17 crc 
kubenswrapper[4895]: I0129 09:01:17.010905 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.098700 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s8csf" event={"ID":"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2","Type":"ContainerDied","Data":"8f0683fbbb9de4fb6a5e402cc229212eba89866482bd10ff0bbbdc5bbce0aea4"} Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.098758 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f0683fbbb9de4fb6a5e402cc229212eba89866482bd10ff0bbbdc5bbce0aea4" Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.098809 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s8csf" Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.197590 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-config-data\") pod \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.197857 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-scripts\") pod \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.197995 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-credential-keys\") pod \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.198131 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-combined-ca-bundle\") pod \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.198167 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hnwx\" (UniqueName: \"kubernetes.io/projected/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-kube-api-access-5hnwx\") pod \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.198300 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-fernet-keys\") pod \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\" (UID: \"3f1f1920-f2b7-48e3-a3c2-bba4280bfad2\") " Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.207037 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3f1f1920-f2b7-48e3-a3c2-bba4280bfad2" (UID: "3f1f1920-f2b7-48e3-a3c2-bba4280bfad2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.216203 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-kube-api-access-5hnwx" (OuterVolumeSpecName: "kube-api-access-5hnwx") pod "3f1f1920-f2b7-48e3-a3c2-bba4280bfad2" (UID: "3f1f1920-f2b7-48e3-a3c2-bba4280bfad2"). InnerVolumeSpecName "kube-api-access-5hnwx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.216797 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-scripts" (OuterVolumeSpecName: "scripts") pod "3f1f1920-f2b7-48e3-a3c2-bba4280bfad2" (UID: "3f1f1920-f2b7-48e3-a3c2-bba4280bfad2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.228183 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3f1f1920-f2b7-48e3-a3c2-bba4280bfad2" (UID: "3f1f1920-f2b7-48e3-a3c2-bba4280bfad2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.238475 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f1f1920-f2b7-48e3-a3c2-bba4280bfad2" (UID: "3f1f1920-f2b7-48e3-a3c2-bba4280bfad2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.240839 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-config-data" (OuterVolumeSpecName: "config-data") pod "3f1f1920-f2b7-48e3-a3c2-bba4280bfad2" (UID: "3f1f1920-f2b7-48e3-a3c2-bba4280bfad2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.300073 4895 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.300116 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.300129 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hnwx\" (UniqueName: \"kubernetes.io/projected/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-kube-api-access-5hnwx\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.300143 4895 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.300156 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:17 crc kubenswrapper[4895]: I0129 09:01:17.300166 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.116802 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-s8csf"] Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.125975 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-s8csf"] Jan 29 09:01:18 crc 
kubenswrapper[4895]: I0129 09:01:18.218280 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8dl7n"] Jan 29 09:01:18 crc kubenswrapper[4895]: E0129 09:01:18.218754 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1f1920-f2b7-48e3-a3c2-bba4280bfad2" containerName="keystone-bootstrap" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.218772 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1f1920-f2b7-48e3-a3c2-bba4280bfad2" containerName="keystone-bootstrap" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.219012 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1f1920-f2b7-48e3-a3c2-bba4280bfad2" containerName="keystone-bootstrap" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.220411 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.235094 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2twgr" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.235437 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.235513 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.239609 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.249235 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8dl7n"] Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.337063 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-credential-keys\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.337184 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-scripts\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.337224 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-config-data\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.337772 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-combined-ca-bundle\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.338024 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzg2s\" (UniqueName: \"kubernetes.io/projected/19c72a82-d987-4759-9d4f-be17355af27e-kube-api-access-tzg2s\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.338233 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-fernet-keys\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.439908 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-credential-keys\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.440030 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-scripts\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.440057 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-config-data\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.440100 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-combined-ca-bundle\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.440127 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzg2s\" (UniqueName: \"kubernetes.io/projected/19c72a82-d987-4759-9d4f-be17355af27e-kube-api-access-tzg2s\") pod 
\"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.440160 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-fernet-keys\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.447447 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-credential-keys\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.448264 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-combined-ca-bundle\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.448271 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-fernet-keys\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.449228 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-config-data\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc 
kubenswrapper[4895]: I0129 09:01:18.457296 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-scripts\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.461113 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzg2s\" (UniqueName: \"kubernetes.io/projected/19c72a82-d987-4759-9d4f-be17355af27e-kube-api-access-tzg2s\") pod \"keystone-bootstrap-8dl7n\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:18 crc kubenswrapper[4895]: I0129 09:01:18.561279 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:19 crc kubenswrapper[4895]: I0129 09:01:19.227595 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f1f1920-f2b7-48e3-a3c2-bba4280bfad2" path="/var/lib/kubelet/pods/3f1f1920-f2b7-48e3-a3c2-bba4280bfad2/volumes" Jan 29 09:01:26 crc kubenswrapper[4895]: I0129 09:01:26.204554 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2" podUID="c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: i/o timeout" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.020946 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.031727 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.082949 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-httpd-run\") pod \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.083022 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-ovsdbserver-nb\") pod \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.083050 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-config-data\") pod \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.083091 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-ovsdbserver-sb\") pod \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.083693 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" (UID: "8d4443cd-043c-4c9e-8ea6-40dcc4710ea6"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.084131 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-dns-svc\") pod \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.084167 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-internal-tls-certs\") pod \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.084253 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-dns-swift-storage-0\") pod \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.084317 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.084378 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-config\") pod \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.084403 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc28t\" (UniqueName: 
\"kubernetes.io/projected/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-kube-api-access-pc28t\") pod \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\" (UID: \"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9\") " Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.084430 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-combined-ca-bundle\") pod \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.084497 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjkcf\" (UniqueName: \"kubernetes.io/projected/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-kube-api-access-gjkcf\") pod \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.084556 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-logs\") pod \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.084584 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-scripts\") pod \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\" (UID: \"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6\") " Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.085127 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.086871 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-logs" (OuterVolumeSpecName: "logs") pod "8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" (UID: "8d4443cd-043c-4c9e-8ea6-40dcc4710ea6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.099950 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" (UID: "8d4443cd-043c-4c9e-8ea6-40dcc4710ea6"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.106459 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-kube-api-access-gjkcf" (OuterVolumeSpecName: "kube-api-access-gjkcf") pod "8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" (UID: "8d4443cd-043c-4c9e-8ea6-40dcc4710ea6"). InnerVolumeSpecName "kube-api-access-gjkcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.109003 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-scripts" (OuterVolumeSpecName: "scripts") pod "8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" (UID: "8d4443cd-043c-4c9e-8ea6-40dcc4710ea6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.109937 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-kube-api-access-pc28t" (OuterVolumeSpecName: "kube-api-access-pc28t") pod "c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" (UID: "c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9"). InnerVolumeSpecName "kube-api-access-pc28t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.170981 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" (UID: "8d4443cd-043c-4c9e-8ea6-40dcc4710ea6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.174637 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" (UID: "c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.178265 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" (UID: "c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.187821 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.187862 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.187897 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.187908 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc28t\" (UniqueName: \"kubernetes.io/projected/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-kube-api-access-pc28t\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.187938 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.187947 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjkcf\" (UniqueName: \"kubernetes.io/projected/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-kube-api-access-gjkcf\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.187955 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.187962 4895 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.188641 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-config-data" (OuterVolumeSpecName: "config-data") pod "8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" (UID: "8d4443cd-043c-4c9e-8ea6-40dcc4710ea6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.191526 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" (UID: "c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.195355 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" (UID: "8d4443cd-043c-4c9e-8ea6-40dcc4710ea6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.218671 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d4443cd-043c-4c9e-8ea6-40dcc4710ea6","Type":"ContainerDied","Data":"b190271b89f4cf4d523ef63f034d534e27785a469acd08b993c08d5f06c87505"} Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.218738 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.218781 4895 scope.go:117] "RemoveContainer" containerID="a72fdf022f9d463eeae94094d1386b085aec8dcb44a96793e7e470da873ce42c" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.222713 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-config" (OuterVolumeSpecName: "config") pod "c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" (UID: "c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.223698 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2" event={"ID":"c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9","Type":"ContainerDied","Data":"1e6d82e1914123b0aafa367af25c4231e4a0c0e3c27b9c5fd532985dec67ea3b"} Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.223794 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.224269 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.250079 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" (UID: "c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.288104 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.289468 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.289490 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.289502 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.289512 4895 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.289521 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.289533 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.326897 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:01:28 crc 
kubenswrapper[4895]: I0129 09:01:28.337042 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:01:28 crc kubenswrapper[4895]: E0129 09:01:28.337518 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" containerName="glance-httpd" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.337557 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" containerName="glance-httpd" Jan 29 09:01:28 crc kubenswrapper[4895]: E0129 09:01:28.337597 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" containerName="dnsmasq-dns" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.337606 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" containerName="dnsmasq-dns" Jan 29 09:01:28 crc kubenswrapper[4895]: E0129 09:01:28.337622 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" containerName="init" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.337630 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" containerName="init" Jan 29 09:01:28 crc kubenswrapper[4895]: E0129 09:01:28.337647 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" containerName="glance-log" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.337655 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" containerName="glance-log" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.337817 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" containerName="dnsmasq-dns" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.337836 4895 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" containerName="glance-httpd" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.337847 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" containerName="glance-log" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.338871 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.341371 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.341631 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.352500 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.390946 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b56c025d-f59c-402b-8ad8-072e78d3b776-logs\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.391027 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.391067 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.391090 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.391169 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.391192 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdv2s\" (UniqueName: \"kubernetes.io/projected/b56c025d-f59c-402b-8ad8-072e78d3b776-kube-api-access-sdv2s\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.391217 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.391238 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b56c025d-f59c-402b-8ad8-072e78d3b776-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.493559 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.493643 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdv2s\" (UniqueName: \"kubernetes.io/projected/b56c025d-f59c-402b-8ad8-072e78d3b776-kube-api-access-sdv2s\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.493688 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.493722 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b56c025d-f59c-402b-8ad8-072e78d3b776-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.493758 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b56c025d-f59c-402b-8ad8-072e78d3b776-logs\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.493804 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.493840 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.493861 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.494863 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b56c025d-f59c-402b-8ad8-072e78d3b776-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.495136 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.499059 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.499475 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b56c025d-f59c-402b-8ad8-072e78d3b776-logs\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.501197 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.504702 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.506198 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.517046 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdv2s\" (UniqueName: \"kubernetes.io/projected/b56c025d-f59c-402b-8ad8-072e78d3b776-kube-api-access-sdv2s\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.531015 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.582391 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-v5bw2"] Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.597479 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-v5bw2"] Jan 29 09:01:28 crc kubenswrapper[4895]: I0129 09:01:28.658874 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.231022 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d4443cd-043c-4c9e-8ea6-40dcc4710ea6" path="/var/lib/kubelet/pods/8d4443cd-043c-4c9e-8ea6-40dcc4710ea6/volumes" Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.234834 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" path="/var/lib/kubelet/pods/c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9/volumes" Jan 29 09:01:29 crc kubenswrapper[4895]: E0129 09:01:29.501853 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 29 09:01:29 crc kubenswrapper[4895]: E0129 09:01:29.502114 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5nz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-xqv89_openstack(d3fc9317-29e0-4b1c-b598-5d95fc98a1e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.503363 4895 scope.go:117] "RemoveContainer" containerID="17b532f4b7d80ebe5c0108a8feda5d12f86e49a2342dbc55346dd54442da9232" Jan 29 09:01:29 crc kubenswrapper[4895]: E0129 09:01:29.503458 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-xqv89" podUID="d3fc9317-29e0-4b1c-b598-5d95fc98a1e7" Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.715942 4895 scope.go:117] "RemoveContainer" containerID="4e3780dfeca1f60ed185c9b8eaf152e2345370a2ad6837321c72cc73db13bed2" Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.846202 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.952447 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0736f67-dad4-4550-8654-9de2217a8750-logs\") pod \"e0736f67-dad4-4550-8654-9de2217a8750\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.952544 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-scripts\") pod \"e0736f67-dad4-4550-8654-9de2217a8750\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.952568 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-combined-ca-bundle\") pod \"e0736f67-dad4-4550-8654-9de2217a8750\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.952637 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-public-tls-certs\") pod \"e0736f67-dad4-4550-8654-9de2217a8750\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.952701 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-config-data\") pod \"e0736f67-dad4-4550-8654-9de2217a8750\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.952863 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"e0736f67-dad4-4550-8654-9de2217a8750\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.953046 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwmr5\" (UniqueName: \"kubernetes.io/projected/e0736f67-dad4-4550-8654-9de2217a8750-kube-api-access-jwmr5\") pod \"e0736f67-dad4-4550-8654-9de2217a8750\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.953121 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0736f67-dad4-4550-8654-9de2217a8750-httpd-run\") pod \"e0736f67-dad4-4550-8654-9de2217a8750\" (UID: \"e0736f67-dad4-4550-8654-9de2217a8750\") " Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.954539 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0736f67-dad4-4550-8654-9de2217a8750-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e0736f67-dad4-4550-8654-9de2217a8750" (UID: "e0736f67-dad4-4550-8654-9de2217a8750"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.956310 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0736f67-dad4-4550-8654-9de2217a8750-logs" (OuterVolumeSpecName: "logs") pod "e0736f67-dad4-4550-8654-9de2217a8750" (UID: "e0736f67-dad4-4550-8654-9de2217a8750"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.962983 4895 scope.go:117] "RemoveContainer" containerID="8a484f4c6b2f116bf5606f0f05902166a8551c19293ae4490c591cb360bcea37" Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.972453 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "e0736f67-dad4-4550-8654-9de2217a8750" (UID: "e0736f67-dad4-4550-8654-9de2217a8750"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.977255 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0736f67-dad4-4550-8654-9de2217a8750-kube-api-access-jwmr5" (OuterVolumeSpecName: "kube-api-access-jwmr5") pod "e0736f67-dad4-4550-8654-9de2217a8750" (UID: "e0736f67-dad4-4550-8654-9de2217a8750"). InnerVolumeSpecName "kube-api-access-jwmr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:29 crc kubenswrapper[4895]: I0129 09:01:29.978057 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-scripts" (OuterVolumeSpecName: "scripts") pod "e0736f67-dad4-4550-8654-9de2217a8750" (UID: "e0736f67-dad4-4550-8654-9de2217a8750"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.025349 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0736f67-dad4-4550-8654-9de2217a8750" (UID: "e0736f67-dad4-4550-8654-9de2217a8750"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.055755 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.055799 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwmr5\" (UniqueName: \"kubernetes.io/projected/e0736f67-dad4-4550-8654-9de2217a8750-kube-api-access-jwmr5\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.055813 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0736f67-dad4-4550-8654-9de2217a8750-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.055823 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0736f67-dad4-4550-8654-9de2217a8750-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.055831 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.055839 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.068909 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e0736f67-dad4-4550-8654-9de2217a8750" (UID: "e0736f67-dad4-4550-8654-9de2217a8750"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.080493 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.083435 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-config-data" (OuterVolumeSpecName: "config-data") pod "e0736f67-dad4-4550-8654-9de2217a8750" (UID: "e0736f67-dad4-4550-8654-9de2217a8750"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.110090 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8dl7n"] Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.124722 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-4mm74"] Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.157514 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.157682 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0736f67-dad4-4550-8654-9de2217a8750-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.157763 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.254405 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8dl7n" 
event={"ID":"19c72a82-d987-4759-9d4f-be17355af27e","Type":"ContainerStarted","Data":"66fbd27236680e46ac5aa6735110f3d73bc56fbe173787b18b95d3501e464e0c"} Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.264041 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-4mm74" event={"ID":"03042a97-0311-4d0c-9878-380987ec9407","Type":"ContainerStarted","Data":"1403361c1b2d10be87375e74e9c05a0dd1f65671d7353b8b08125b89702787b6"} Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.269336 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.269497 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0736f67-dad4-4550-8654-9de2217a8750","Type":"ContainerDied","Data":"992103af01f6e1420ec71f1ce5996b75f4082dc67378b06f8e62bb395174e453"} Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.269536 4895 scope.go:117] "RemoveContainer" containerID="d2316c9cff98a6ecf877120c210ff92ced277e32951b1989888b22ea09c53491" Jan 29 09:01:30 crc kubenswrapper[4895]: E0129 09:01:30.285435 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-xqv89" podUID="d3fc9317-29e0-4b1c-b598-5d95fc98a1e7" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.318109 4895 scope.go:117] "RemoveContainer" containerID="f6758220393245f1db12f1a820a304c157659efbbd0c4935d734f66ab6105fb7" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.349235 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.370119 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-default-external-api-0"] Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.389084 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:01:30 crc kubenswrapper[4895]: E0129 09:01:30.389687 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0736f67-dad4-4550-8654-9de2217a8750" containerName="glance-log" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.389766 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0736f67-dad4-4550-8654-9de2217a8750" containerName="glance-log" Jan 29 09:01:30 crc kubenswrapper[4895]: E0129 09:01:30.389847 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0736f67-dad4-4550-8654-9de2217a8750" containerName="glance-httpd" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.389895 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0736f67-dad4-4550-8654-9de2217a8750" containerName="glance-httpd" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.390189 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0736f67-dad4-4550-8654-9de2217a8750" containerName="glance-log" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.390255 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0736f67-dad4-4550-8654-9de2217a8750" containerName="glance-httpd" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.391622 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.394450 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.395063 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.401641 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:01:30 crc kubenswrapper[4895]: W0129 09:01:30.444699 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb56c025d_f59c_402b_8ad8_072e78d3b776.slice/crio-93dd0ad8d3b7a6ee99134429c00ed1df5c308d0e4f92198ece399d1ed1cb1b24 WatchSource:0}: Error finding container 93dd0ad8d3b7a6ee99134429c00ed1df5c308d0e4f92198ece399d1ed1cb1b24: Status 404 returned error can't find the container with id 93dd0ad8d3b7a6ee99134429c00ed1df5c308d0e4f92198ece399d1ed1cb1b24 Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.449099 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.467416 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4whcw\" (UniqueName: \"kubernetes.io/projected/53f417a8-012d-4063-b1b8-e60f50fbf8ae-kube-api-access-4whcw\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.468322 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.468498 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-config-data\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.468603 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53f417a8-012d-4063-b1b8-e60f50fbf8ae-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.468715 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.468810 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.468976 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/53f417a8-012d-4063-b1b8-e60f50fbf8ae-logs\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.469063 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.571030 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4whcw\" (UniqueName: \"kubernetes.io/projected/53f417a8-012d-4063-b1b8-e60f50fbf8ae-kube-api-access-4whcw\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.571110 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-scripts\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.571163 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-config-data\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.571219 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53f417a8-012d-4063-b1b8-e60f50fbf8ae-httpd-run\") 
pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.571265 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.571291 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.571333 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53f417a8-012d-4063-b1b8-e60f50fbf8ae-logs\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.571373 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.572472 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53f417a8-012d-4063-b1b8-e60f50fbf8ae-logs\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " 
pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.574047 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53f417a8-012d-4063-b1b8-e60f50fbf8ae-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.574904 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.581454 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.582476 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-config-data\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.582990 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-scripts\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 
09:01:30.587270 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.590381 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4whcw\" (UniqueName: \"kubernetes.io/projected/53f417a8-012d-4063-b1b8-e60f50fbf8ae-kube-api-access-4whcw\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.639306 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " pod="openstack/glance-default-external-api-0" Jan 29 09:01:30 crc kubenswrapper[4895]: I0129 09:01:30.728991 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:01:31 crc kubenswrapper[4895]: I0129 09:01:31.206355 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-v5bw2" podUID="c20f0ab7-c3fb-40ba-8abe-ad8105c29ff9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: i/o timeout" Jan 29 09:01:31 crc kubenswrapper[4895]: I0129 09:01:31.233839 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0736f67-dad4-4550-8654-9de2217a8750" path="/var/lib/kubelet/pods/e0736f67-dad4-4550-8654-9de2217a8750/volumes" Jan 29 09:01:31 crc kubenswrapper[4895]: I0129 09:01:31.328872 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jlpfn" event={"ID":"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa","Type":"ContainerStarted","Data":"ca364c4f2a5a434ec260354594ac049a4e6ec2a459bef0852d6fe25b082cda24"} Jan 29 09:01:31 crc kubenswrapper[4895]: I0129 09:01:31.345627 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee","Type":"ContainerStarted","Data":"d357bf26a0a34fec8373f6f84fb931011dc103ddf505bcd3608bef7bf78df7f1"} Jan 29 09:01:31 crc kubenswrapper[4895]: I0129 09:01:31.348235 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8dl7n" event={"ID":"19c72a82-d987-4759-9d4f-be17355af27e","Type":"ContainerStarted","Data":"7988cad9ada011a7a3ff18077aa4aff4c5b53515df6c7184d51131d34c3eb5eb"} Jan 29 09:01:31 crc kubenswrapper[4895]: I0129 09:01:31.353905 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b56c025d-f59c-402b-8ad8-072e78d3b776","Type":"ContainerStarted","Data":"6340fa4e8500894fe5fdd4e8727a9c964dde4935e0e16ed68276fec030e46b14"} Jan 29 09:01:31 crc kubenswrapper[4895]: I0129 09:01:31.354374 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"b56c025d-f59c-402b-8ad8-072e78d3b776","Type":"ContainerStarted","Data":"93dd0ad8d3b7a6ee99134429c00ed1df5c308d0e4f92198ece399d1ed1cb1b24"} Jan 29 09:01:31 crc kubenswrapper[4895]: I0129 09:01:31.359772 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-jlpfn" podStartSLOduration=4.953817694 podStartE2EDuration="27.35974347s" podCreationTimestamp="2026-01-29 09:01:04 +0000 UTC" firstStartedPulling="2026-01-29 09:01:07.059894426 +0000 UTC m=+1208.701402572" lastFinishedPulling="2026-01-29 09:01:29.465820202 +0000 UTC m=+1231.107328348" observedRunningTime="2026-01-29 09:01:31.355313862 +0000 UTC m=+1232.996822018" watchObservedRunningTime="2026-01-29 09:01:31.35974347 +0000 UTC m=+1233.001251626" Jan 29 09:01:31 crc kubenswrapper[4895]: I0129 09:01:31.364706 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qd4f9" event={"ID":"1077fb9f-c6a3-416d-a7c9-011dd8954ab1","Type":"ContainerStarted","Data":"120e4051586881d34b1aeb09a36b8d00c02da0d4e2fc0f47da31bc517f1b0cc8"} Jan 29 09:01:31 crc kubenswrapper[4895]: I0129 09:01:31.389591 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8dl7n" podStartSLOduration=13.389570027 podStartE2EDuration="13.389570027s" podCreationTimestamp="2026-01-29 09:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:31.375450979 +0000 UTC m=+1233.016959145" watchObservedRunningTime="2026-01-29 09:01:31.389570027 +0000 UTC m=+1233.031078173" Jan 29 09:01:31 crc kubenswrapper[4895]: W0129 09:01:31.415016 4895 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53f417a8_012d_4063_b1b8_e60f50fbf8ae.slice/crio-837a29b0bbe27816c8233f6ef08564f7e504846e39ca5c6158fac4b34ad2ae2b WatchSource:0}: Error finding container 837a29b0bbe27816c8233f6ef08564f7e504846e39ca5c6158fac4b34ad2ae2b: Status 404 returned error can't find the container with id 837a29b0bbe27816c8233f6ef08564f7e504846e39ca5c6158fac4b34ad2ae2b Jan 29 09:01:31 crc kubenswrapper[4895]: I0129 09:01:31.420677 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:01:31 crc kubenswrapper[4895]: I0129 09:01:31.431628 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-qd4f9" podStartSLOduration=4.563515821 podStartE2EDuration="27.43160744s" podCreationTimestamp="2026-01-29 09:01:04 +0000 UTC" firstStartedPulling="2026-01-29 09:01:06.652505178 +0000 UTC m=+1208.294013324" lastFinishedPulling="2026-01-29 09:01:29.520596797 +0000 UTC m=+1231.162104943" observedRunningTime="2026-01-29 09:01:31.396964095 +0000 UTC m=+1233.038472271" watchObservedRunningTime="2026-01-29 09:01:31.43160744 +0000 UTC m=+1233.073115586" Jan 29 09:01:32 crc kubenswrapper[4895]: I0129 09:01:32.400165 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53f417a8-012d-4063-b1b8-e60f50fbf8ae","Type":"ContainerStarted","Data":"728474e3554cd05d244bf875d5bf9615eaa97a07cfe2f3dd31a5c25d884b9b93"} Jan 29 09:01:32 crc kubenswrapper[4895]: I0129 09:01:32.401113 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53f417a8-012d-4063-b1b8-e60f50fbf8ae","Type":"ContainerStarted","Data":"837a29b0bbe27816c8233f6ef08564f7e504846e39ca5c6158fac4b34ad2ae2b"} Jan 29 09:01:32 crc kubenswrapper[4895]: I0129 09:01:32.411515 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"b56c025d-f59c-402b-8ad8-072e78d3b776","Type":"ContainerStarted","Data":"8e00c9b1edea6840545c4e3f417d54a872460f1668d288f2b62b3044b0eb35c5"} Jan 29 09:01:32 crc kubenswrapper[4895]: I0129 09:01:32.453762 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.453720229 podStartE2EDuration="4.453720229s" podCreationTimestamp="2026-01-29 09:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:32.437640449 +0000 UTC m=+1234.079148595" watchObservedRunningTime="2026-01-29 09:01:32.453720229 +0000 UTC m=+1234.095228375" Jan 29 09:01:33 crc kubenswrapper[4895]: I0129 09:01:33.418910 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53f417a8-012d-4063-b1b8-e60f50fbf8ae","Type":"ContainerStarted","Data":"504cc3f5f228f93b4ad6e247c43ab7a494129cc60fef4bd8d37e9a649d0e74ff"} Jan 29 09:01:33 crc kubenswrapper[4895]: I0129 09:01:33.423813 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee","Type":"ContainerStarted","Data":"a960aa03999fc78683d8dddb933003aba863fb1fe3fa19a361f929143814fed7"} Jan 29 09:01:33 crc kubenswrapper[4895]: I0129 09:01:33.453478 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.453444094 podStartE2EDuration="3.453444094s" podCreationTimestamp="2026-01-29 09:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:33.44357995 +0000 UTC m=+1235.085088106" watchObservedRunningTime="2026-01-29 09:01:33.453444094 +0000 UTC m=+1235.094952240" Jan 29 09:01:36 crc kubenswrapper[4895]: I0129 09:01:36.495953 4895 generic.go:334] 
"Generic (PLEG): container finished" podID="1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa" containerID="ca364c4f2a5a434ec260354594ac049a4e6ec2a459bef0852d6fe25b082cda24" exitCode=0 Jan 29 09:01:36 crc kubenswrapper[4895]: I0129 09:01:36.495997 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jlpfn" event={"ID":"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa","Type":"ContainerDied","Data":"ca364c4f2a5a434ec260354594ac049a4e6ec2a459bef0852d6fe25b082cda24"} Jan 29 09:01:36 crc kubenswrapper[4895]: I0129 09:01:36.502700 4895 generic.go:334] "Generic (PLEG): container finished" podID="19c72a82-d987-4759-9d4f-be17355af27e" containerID="7988cad9ada011a7a3ff18077aa4aff4c5b53515df6c7184d51131d34c3eb5eb" exitCode=0 Jan 29 09:01:36 crc kubenswrapper[4895]: I0129 09:01:36.502763 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8dl7n" event={"ID":"19c72a82-d987-4759-9d4f-be17355af27e","Type":"ContainerDied","Data":"7988cad9ada011a7a3ff18077aa4aff4c5b53515df6c7184d51131d34c3eb5eb"} Jan 29 09:01:37 crc kubenswrapper[4895]: I0129 09:01:37.973066 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:37 crc kubenswrapper[4895]: I0129 09:01:37.986841 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.062969 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzg2s\" (UniqueName: \"kubernetes.io/projected/19c72a82-d987-4759-9d4f-be17355af27e-kube-api-access-tzg2s\") pod \"19c72a82-d987-4759-9d4f-be17355af27e\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.063075 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-combined-ca-bundle\") pod \"19c72a82-d987-4759-9d4f-be17355af27e\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.063115 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-scripts\") pod \"19c72a82-d987-4759-9d4f-be17355af27e\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.063135 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-config-data\") pod \"19c72a82-d987-4759-9d4f-be17355af27e\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.063175 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-credential-keys\") pod \"19c72a82-d987-4759-9d4f-be17355af27e\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.063372 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-fernet-keys\") pod \"19c72a82-d987-4759-9d4f-be17355af27e\" (UID: \"19c72a82-d987-4759-9d4f-be17355af27e\") " Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.070134 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-scripts" (OuterVolumeSpecName: "scripts") pod "19c72a82-d987-4759-9d4f-be17355af27e" (UID: "19c72a82-d987-4759-9d4f-be17355af27e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.070323 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "19c72a82-d987-4759-9d4f-be17355af27e" (UID: "19c72a82-d987-4759-9d4f-be17355af27e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.071450 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19c72a82-d987-4759-9d4f-be17355af27e-kube-api-access-tzg2s" (OuterVolumeSpecName: "kube-api-access-tzg2s") pod "19c72a82-d987-4759-9d4f-be17355af27e" (UID: "19c72a82-d987-4759-9d4f-be17355af27e"). InnerVolumeSpecName "kube-api-access-tzg2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.072275 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "19c72a82-d987-4759-9d4f-be17355af27e" (UID: "19c72a82-d987-4759-9d4f-be17355af27e"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.091740 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-config-data" (OuterVolumeSpecName: "config-data") pod "19c72a82-d987-4759-9d4f-be17355af27e" (UID: "19c72a82-d987-4759-9d4f-be17355af27e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.100447 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19c72a82-d987-4759-9d4f-be17355af27e" (UID: "19c72a82-d987-4759-9d4f-be17355af27e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.165938 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86788\" (UniqueName: \"kubernetes.io/projected/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-kube-api-access-86788\") pod \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.166041 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-scripts\") pod \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.166361 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-combined-ca-bundle\") pod \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " Jan 29 
09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.166491 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-logs\") pod \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.166549 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-config-data\") pod \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\" (UID: \"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa\") " Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.167458 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-logs" (OuterVolumeSpecName: "logs") pod "1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa" (UID: "1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.168754 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzg2s\" (UniqueName: \"kubernetes.io/projected/19c72a82-d987-4759-9d4f-be17355af27e-kube-api-access-tzg2s\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.168803 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.168823 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.168840 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.168857 4895 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.168871 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.168885 4895 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/19c72a82-d987-4759-9d4f-be17355af27e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.171708 4895 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-kube-api-access-86788" (OuterVolumeSpecName: "kube-api-access-86788") pod "1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa" (UID: "1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa"). InnerVolumeSpecName "kube-api-access-86788". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.173129 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-scripts" (OuterVolumeSpecName: "scripts") pod "1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa" (UID: "1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.194226 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa" (UID: "1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.196372 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-config-data" (OuterVolumeSpecName: "config-data") pod "1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa" (UID: "1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.270828 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.270879 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.270894 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86788\" (UniqueName: \"kubernetes.io/projected/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-kube-api-access-86788\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.270912 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.528657 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-4mm74" event={"ID":"03042a97-0311-4d0c-9878-380987ec9407","Type":"ContainerStarted","Data":"56833c150f9292e4b2b93fc1608f8a38b1d7bcd517652999062ff357d17e1ece"} Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.533738 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8dl7n" event={"ID":"19c72a82-d987-4759-9d4f-be17355af27e","Type":"ContainerDied","Data":"66fbd27236680e46ac5aa6735110f3d73bc56fbe173787b18b95d3501e464e0c"} Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.533796 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8dl7n" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.533812 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66fbd27236680e46ac5aa6735110f3d73bc56fbe173787b18b95d3501e464e0c" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.537382 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jlpfn" event={"ID":"1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa","Type":"ContainerDied","Data":"b050aea8ad48ed95fa103f7761ddf96e964713b5f7fea35ce1f4c658667c2fdb"} Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.537450 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b050aea8ad48ed95fa103f7761ddf96e964713b5f7fea35ce1f4c658667c2fdb" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.537408 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jlpfn" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.659779 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.661045 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.745786 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-56569dbddd-srzk5"] Jan 29 09:01:38 crc kubenswrapper[4895]: E0129 09:01:38.748014 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa" containerName="placement-db-sync" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.748040 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa" containerName="placement-db-sync" Jan 29 09:01:38 crc kubenswrapper[4895]: E0129 09:01:38.748070 4895 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c72a82-d987-4759-9d4f-be17355af27e" containerName="keystone-bootstrap" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.748077 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c72a82-d987-4759-9d4f-be17355af27e" containerName="keystone-bootstrap" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.752848 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="19c72a82-d987-4759-9d4f-be17355af27e" containerName="keystone-bootstrap" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.758285 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa" containerName="placement-db-sync" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.772118 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.775641 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.787989 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.793569 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5f75d78756-glzhf"] Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.795871 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.796379 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6v7bg" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.796651 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 29 09:01:38 crc 
kubenswrapper[4895]: I0129 09:01:38.798157 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.799319 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.846907 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.867734 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f75d78756-glzhf"] Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.868705 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.868998 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.869189 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.869793 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.869987 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2twgr" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.870355 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.895950 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-56569dbddd-srzk5"] Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.906394 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsnzq\" 
(UniqueName: \"kubernetes.io/projected/d2537c60-3372-4ac4-b801-808c93e9cf6f-kube-api-access-bsnzq\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.906462 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-scripts\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.906509 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-fernet-keys\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.906548 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-public-tls-certs\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.906595 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2537c60-3372-4ac4-b801-808c93e9cf6f-logs\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.906640 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-config-data\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.906669 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-config-data\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.906785 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-scripts\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.906829 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-internal-tls-certs\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.906860 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjbfn\" (UniqueName: \"kubernetes.io/projected/406e8af5-68c1-48c3-b377-68d3f60c10a9-kube-api-access-wjbfn\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.906881 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-credential-keys\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.906934 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-internal-tls-certs\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.906989 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-combined-ca-bundle\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.907025 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-public-tls-certs\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:38 crc kubenswrapper[4895]: I0129 09:01:38.907063 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-combined-ca-bundle\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.008228 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-fernet-keys\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.008287 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-public-tls-certs\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.008319 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2537c60-3372-4ac4-b801-808c93e9cf6f-logs\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.008348 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-config-data\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.008367 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-config-data\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.008416 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-scripts\") pod 
\"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.008441 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-internal-tls-certs\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.008467 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjbfn\" (UniqueName: \"kubernetes.io/projected/406e8af5-68c1-48c3-b377-68d3f60c10a9-kube-api-access-wjbfn\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.008485 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-credential-keys\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.008520 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-internal-tls-certs\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.008553 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-combined-ca-bundle\") pod \"keystone-5f75d78756-glzhf\" (UID: 
\"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.008572 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-public-tls-certs\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.008595 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-combined-ca-bundle\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.008672 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsnzq\" (UniqueName: \"kubernetes.io/projected/d2537c60-3372-4ac4-b801-808c93e9cf6f-kube-api-access-bsnzq\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.008723 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-scripts\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.009039 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2537c60-3372-4ac4-b801-808c93e9cf6f-logs\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:39 
crc kubenswrapper[4895]: I0129 09:01:39.014083 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-fernet-keys\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.015407 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-scripts\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.017191 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-credential-keys\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.019139 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-combined-ca-bundle\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.020373 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-config-data\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.021334 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-config-data\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.021581 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-scripts\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.025068 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-combined-ca-bundle\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.025946 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-public-tls-certs\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.026793 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-public-tls-certs\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.031872 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/406e8af5-68c1-48c3-b377-68d3f60c10a9-internal-tls-certs\") pod \"keystone-5f75d78756-glzhf\" (UID: 
\"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.037122 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsnzq\" (UniqueName: \"kubernetes.io/projected/d2537c60-3372-4ac4-b801-808c93e9cf6f-kube-api-access-bsnzq\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.038998 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjbfn\" (UniqueName: \"kubernetes.io/projected/406e8af5-68c1-48c3-b377-68d3f60c10a9-kube-api-access-wjbfn\") pod \"keystone-5f75d78756-glzhf\" (UID: \"406e8af5-68c1-48c3-b377-68d3f60c10a9\") " pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.048706 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-internal-tls-certs\") pod \"placement-56569dbddd-srzk5\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.206410 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.300799 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.553489 4895 generic.go:334] "Generic (PLEG): container finished" podID="03042a97-0311-4d0c-9878-380987ec9407" containerID="56833c150f9292e4b2b93fc1608f8a38b1d7bcd517652999062ff357d17e1ece" exitCode=0 Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.554047 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-4mm74" event={"ID":"03042a97-0311-4d0c-9878-380987ec9407","Type":"ContainerDied","Data":"56833c150f9292e4b2b93fc1608f8a38b1d7bcd517652999062ff357d17e1ece"} Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.564542 4895 generic.go:334] "Generic (PLEG): container finished" podID="1077fb9f-c6a3-416d-a7c9-011dd8954ab1" containerID="120e4051586881d34b1aeb09a36b8d00c02da0d4e2fc0f47da31bc517f1b0cc8" exitCode=0 Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.565889 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qd4f9" event={"ID":"1077fb9f-c6a3-416d-a7c9-011dd8954ab1","Type":"ContainerDied","Data":"120e4051586881d34b1aeb09a36b8d00c02da0d4e2fc0f47da31bc517f1b0cc8"} Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.566039 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.566286 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 09:01:39 crc kubenswrapper[4895]: E0129 09:01:39.872464 4895 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 29 09:01:39 crc kubenswrapper[4895]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/03042a97-0311-4d0c-9878-380987ec9407/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory Jan 29 09:01:39 crc 
kubenswrapper[4895]: > podSandboxID="1403361c1b2d10be87375e74e9c05a0dd1f65671d7353b8b08125b89702787b6" Jan 29 09:01:39 crc kubenswrapper[4895]: E0129 09:01:39.873225 4895 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 09:01:39 crc kubenswrapper[4895]: container &Container{Name:ironic-db-sync,Image:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/container-scripts/dbsync.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-merged,ReadOnly:false,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-podinfo,ReadOnly:false,MountPath:/etc/podinfo,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5pqdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,Ru
nAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-db-sync-4mm74_openstack(03042a97-0311-4d0c-9878-380987ec9407): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/03042a97-0311-4d0c-9878-380987ec9407/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory Jan 29 09:01:39 crc kubenswrapper[4895]: > logger="UnhandledError" Jan 29 09:01:39 crc kubenswrapper[4895]: E0129 09:01:39.874631 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-db-sync\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/03042a97-0311-4d0c-9878-380987ec9407/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory\\n\"" pod="openstack/ironic-db-sync-4mm74" podUID="03042a97-0311-4d0c-9878-380987ec9407" Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.904708 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-56569dbddd-srzk5"] Jan 29 09:01:39 crc kubenswrapper[4895]: I0129 09:01:39.929005 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f75d78756-glzhf"] Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.207865 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-796fb887fb-dd2s5"] Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.210476 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.217867 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-796fb887fb-dd2s5"] Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.347701 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7b9632-7a45-48f5-8887-4c79543170fd-logs\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.347871 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7b9632-7a45-48f5-8887-4c79543170fd-config-data\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.348069 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7b9632-7a45-48f5-8887-4c79543170fd-combined-ca-bundle\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.348101 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e7b9632-7a45-48f5-8887-4c79543170fd-internal-tls-certs\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.348141 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-j5t6n\" (UniqueName: \"kubernetes.io/projected/2e7b9632-7a45-48f5-8887-4c79543170fd-kube-api-access-j5t6n\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.348176 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7b9632-7a45-48f5-8887-4c79543170fd-scripts\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.348212 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e7b9632-7a45-48f5-8887-4c79543170fd-public-tls-certs\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.453542 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5t6n\" (UniqueName: \"kubernetes.io/projected/2e7b9632-7a45-48f5-8887-4c79543170fd-kube-api-access-j5t6n\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.453646 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7b9632-7a45-48f5-8887-4c79543170fd-scripts\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.453735 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2e7b9632-7a45-48f5-8887-4c79543170fd-public-tls-certs\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.453779 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7b9632-7a45-48f5-8887-4c79543170fd-logs\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.453826 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7b9632-7a45-48f5-8887-4c79543170fd-config-data\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.454062 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e7b9632-7a45-48f5-8887-4c79543170fd-internal-tls-certs\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.454099 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7b9632-7a45-48f5-8887-4c79543170fd-combined-ca-bundle\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.457215 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7b9632-7a45-48f5-8887-4c79543170fd-logs\") pod \"placement-796fb887fb-dd2s5\" (UID: 
\"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.466454 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7b9632-7a45-48f5-8887-4c79543170fd-scripts\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.466688 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e7b9632-7a45-48f5-8887-4c79543170fd-internal-tls-certs\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.466812 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7b9632-7a45-48f5-8887-4c79543170fd-config-data\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.467049 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7b9632-7a45-48f5-8887-4c79543170fd-combined-ca-bundle\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.467406 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e7b9632-7a45-48f5-8887-4c79543170fd-public-tls-certs\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: 
I0129 09:01:40.475001 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5t6n\" (UniqueName: \"kubernetes.io/projected/2e7b9632-7a45-48f5-8887-4c79543170fd-kube-api-access-j5t6n\") pod \"placement-796fb887fb-dd2s5\" (UID: \"2e7b9632-7a45-48f5-8887-4c79543170fd\") " pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.539748 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.586336 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56569dbddd-srzk5" event={"ID":"d2537c60-3372-4ac4-b801-808c93e9cf6f","Type":"ContainerStarted","Data":"6072e11f24fb74d79213dd357f09ecf4eade987e75900d63c8c2f3c6fc544655"} Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.586416 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56569dbddd-srzk5" event={"ID":"d2537c60-3372-4ac4-b801-808c93e9cf6f","Type":"ContainerStarted","Data":"e4b44bacaea02ed29aae1c939f225fc040458159b14a315ded32563607144072"} Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.593231 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f75d78756-glzhf" event={"ID":"406e8af5-68c1-48c3-b377-68d3f60c10a9","Type":"ContainerStarted","Data":"0c6cb668188c472d80148c62c78a53b349672d734e87c20444e683992e9b0b0e"} Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.593306 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f75d78756-glzhf" event={"ID":"406e8af5-68c1-48c3-b377-68d3f60c10a9","Type":"ContainerStarted","Data":"39fd97004da3677c492fca820db1c388db4c300ed1f528f35831ba775988a6dd"} Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.593849 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5f75d78756-glzhf" Jan 29 09:01:40 crc kubenswrapper[4895]: 
I0129 09:01:40.599552 4895 generic.go:334] "Generic (PLEG): container finished" podID="75739842-3c97-4e9a-b13b-4fb5929461b8" containerID="4037c578807da950a84229413bdc56f16362aef531973ed7facc667f91bf8152" exitCode=0 Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.599982 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-95b5h" event={"ID":"75739842-3c97-4e9a-b13b-4fb5929461b8","Type":"ContainerDied","Data":"4037c578807da950a84229413bdc56f16362aef531973ed7facc667f91bf8152"} Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.642195 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5f75d78756-glzhf" podStartSLOduration=2.6421721700000003 podStartE2EDuration="2.64217217s" podCreationTimestamp="2026-01-29 09:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:40.625094163 +0000 UTC m=+1242.266602329" watchObservedRunningTime="2026-01-29 09:01:40.64217217 +0000 UTC m=+1242.283680316" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.730993 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.731518 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.835024 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 09:01:40 crc kubenswrapper[4895]: I0129 09:01:40.843393 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.082663 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qd4f9" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.195274 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-db-sync-config-data\") pod \"1077fb9f-c6a3-416d-a7c9-011dd8954ab1\" (UID: \"1077fb9f-c6a3-416d-a7c9-011dd8954ab1\") " Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.195383 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5x5k\" (UniqueName: \"kubernetes.io/projected/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-kube-api-access-x5x5k\") pod \"1077fb9f-c6a3-416d-a7c9-011dd8954ab1\" (UID: \"1077fb9f-c6a3-416d-a7c9-011dd8954ab1\") " Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.195437 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-combined-ca-bundle\") pod \"1077fb9f-c6a3-416d-a7c9-011dd8954ab1\" (UID: \"1077fb9f-c6a3-416d-a7c9-011dd8954ab1\") " Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.208212 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1077fb9f-c6a3-416d-a7c9-011dd8954ab1" (UID: "1077fb9f-c6a3-416d-a7c9-011dd8954ab1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.230243 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-kube-api-access-x5x5k" (OuterVolumeSpecName: "kube-api-access-x5x5k") pod "1077fb9f-c6a3-416d-a7c9-011dd8954ab1" (UID: "1077fb9f-c6a3-416d-a7c9-011dd8954ab1"). 
InnerVolumeSpecName "kube-api-access-x5x5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.262245 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1077fb9f-c6a3-416d-a7c9-011dd8954ab1" (UID: "1077fb9f-c6a3-416d-a7c9-011dd8954ab1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.298753 4895 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.299388 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5x5k\" (UniqueName: \"kubernetes.io/projected/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-kube-api-access-x5x5k\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.299547 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1077fb9f-c6a3-416d-a7c9-011dd8954ab1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.384322 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-796fb887fb-dd2s5"] Jan 29 09:01:41 crc kubenswrapper[4895]: W0129 09:01:41.396997 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e7b9632_7a45_48f5_8887_4c79543170fd.slice/crio-a185eeb5292e2aa378490d05697e39c234a93a07dfcd71bf9b689ac40fe58c0d WatchSource:0}: Error finding container a185eeb5292e2aa378490d05697e39c234a93a07dfcd71bf9b689ac40fe58c0d: Status 404 returned error can't find the 
container with id a185eeb5292e2aa378490d05697e39c234a93a07dfcd71bf9b689ac40fe58c0d Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.654539 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-796fb887fb-dd2s5" event={"ID":"2e7b9632-7a45-48f5-8887-4c79543170fd","Type":"ContainerStarted","Data":"a185eeb5292e2aa378490d05697e39c234a93a07dfcd71bf9b689ac40fe58c0d"} Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.668305 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qd4f9" event={"ID":"1077fb9f-c6a3-416d-a7c9-011dd8954ab1","Type":"ContainerDied","Data":"b1062d78810e03831675a6c32079a45db150ddd7846a5aa758c8461af873b144"} Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.668368 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1062d78810e03831675a6c32079a45db150ddd7846a5aa758c8461af873b144" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.668327 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qd4f9" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.677372 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-4mm74" event={"ID":"03042a97-0311-4d0c-9878-380987ec9407","Type":"ContainerStarted","Data":"e9e654af3592b36d862a254b4f8e02894fb0c64a3069f796d8bb6f59fa0e5d60"} Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.703299 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56569dbddd-srzk5" event={"ID":"d2537c60-3372-4ac4-b801-808c93e9cf6f","Type":"ContainerStarted","Data":"c89d89f0739397f15d0ce3d5e15228dd0148bb6c356a08ddcd3a367add57bd84"} Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.703625 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.703793 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.703861 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.704440 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.705299 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.705578 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.705716 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-4mm74" podStartSLOduration=19.540069162 podStartE2EDuration="27.705689544s" podCreationTimestamp="2026-01-29 09:01:14 +0000 UTC" 
firstStartedPulling="2026-01-29 09:01:30.115492065 +0000 UTC m=+1231.757000211" lastFinishedPulling="2026-01-29 09:01:38.281112447 +0000 UTC m=+1239.922620593" observedRunningTime="2026-01-29 09:01:41.697826234 +0000 UTC m=+1243.339334380" watchObservedRunningTime="2026-01-29 09:01:41.705689544 +0000 UTC m=+1243.347197690" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.748644 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-56569dbddd-srzk5" podStartSLOduration=3.748612515 podStartE2EDuration="3.748612515s" podCreationTimestamp="2026-01-29 09:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:41.73014637 +0000 UTC m=+1243.371654526" watchObservedRunningTime="2026-01-29 09:01:41.748612515 +0000 UTC m=+1243.390120661" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.901281 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-555f79d94f-q55hl"] Jan 29 09:01:41 crc kubenswrapper[4895]: E0129 09:01:41.902290 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1077fb9f-c6a3-416d-a7c9-011dd8954ab1" containerName="barbican-db-sync" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.902317 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1077fb9f-c6a3-416d-a7c9-011dd8954ab1" containerName="barbican-db-sync" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.902589 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1077fb9f-c6a3-416d-a7c9-011dd8954ab1" containerName="barbican-db-sync" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.903815 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.907413 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.911239 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vmp9g" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.911384 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.918117 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-555f79d94f-q55hl"] Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.965021 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-bb49b7794-577rp"] Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.967580 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:41 crc kubenswrapper[4895]: I0129 09:01:41.971483 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.019152 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rzqv\" (UniqueName: \"kubernetes.io/projected/c403270a-6868-4dec-8340-ac3237f9028e-kube-api-access-4rzqv\") pod \"barbican-worker-555f79d94f-q55hl\" (UID: \"c403270a-6868-4dec-8340-ac3237f9028e\") " pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.019532 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c403270a-6868-4dec-8340-ac3237f9028e-config-data\") pod \"barbican-worker-555f79d94f-q55hl\" (UID: \"c403270a-6868-4dec-8340-ac3237f9028e\") " pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.019678 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c403270a-6868-4dec-8340-ac3237f9028e-combined-ca-bundle\") pod \"barbican-worker-555f79d94f-q55hl\" (UID: \"c403270a-6868-4dec-8340-ac3237f9028e\") " pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.019801 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c403270a-6868-4dec-8340-ac3237f9028e-config-data-custom\") pod \"barbican-worker-555f79d94f-q55hl\" (UID: \"c403270a-6868-4dec-8340-ac3237f9028e\") " pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 
09:01:42.019953 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c403270a-6868-4dec-8340-ac3237f9028e-logs\") pod \"barbican-worker-555f79d94f-q55hl\" (UID: \"c403270a-6868-4dec-8340-ac3237f9028e\") " pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.082034 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-bb49b7794-577rp"] Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.125318 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1d9162f-7759-46d6-bea9-a9975470a1d9-logs\") pod \"barbican-keystone-listener-bb49b7794-577rp\" (UID: \"c1d9162f-7759-46d6-bea9-a9975470a1d9\") " pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.125389 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c403270a-6868-4dec-8340-ac3237f9028e-logs\") pod \"barbican-worker-555f79d94f-q55hl\" (UID: \"c403270a-6868-4dec-8340-ac3237f9028e\") " pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.125424 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1d9162f-7759-46d6-bea9-a9975470a1d9-config-data\") pod \"barbican-keystone-listener-bb49b7794-577rp\" (UID: \"c1d9162f-7759-46d6-bea9-a9975470a1d9\") " pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.125456 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c1d9162f-7759-46d6-bea9-a9975470a1d9-config-data-custom\") pod \"barbican-keystone-listener-bb49b7794-577rp\" (UID: \"c1d9162f-7759-46d6-bea9-a9975470a1d9\") " pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.125511 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2n5d\" (UniqueName: \"kubernetes.io/projected/c1d9162f-7759-46d6-bea9-a9975470a1d9-kube-api-access-f2n5d\") pod \"barbican-keystone-listener-bb49b7794-577rp\" (UID: \"c1d9162f-7759-46d6-bea9-a9975470a1d9\") " pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.125538 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rzqv\" (UniqueName: \"kubernetes.io/projected/c403270a-6868-4dec-8340-ac3237f9028e-kube-api-access-4rzqv\") pod \"barbican-worker-555f79d94f-q55hl\" (UID: \"c403270a-6868-4dec-8340-ac3237f9028e\") " pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.125565 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c403270a-6868-4dec-8340-ac3237f9028e-config-data\") pod \"barbican-worker-555f79d94f-q55hl\" (UID: \"c403270a-6868-4dec-8340-ac3237f9028e\") " pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.125610 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c403270a-6868-4dec-8340-ac3237f9028e-combined-ca-bundle\") pod \"barbican-worker-555f79d94f-q55hl\" (UID: \"c403270a-6868-4dec-8340-ac3237f9028e\") " pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.125646 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c403270a-6868-4dec-8340-ac3237f9028e-config-data-custom\") pod \"barbican-worker-555f79d94f-q55hl\" (UID: \"c403270a-6868-4dec-8340-ac3237f9028e\") " pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.125668 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1d9162f-7759-46d6-bea9-a9975470a1d9-combined-ca-bundle\") pod \"barbican-keystone-listener-bb49b7794-577rp\" (UID: \"c1d9162f-7759-46d6-bea9-a9975470a1d9\") " pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.126160 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c403270a-6868-4dec-8340-ac3237f9028e-logs\") pod \"barbican-worker-555f79d94f-q55hl\" (UID: \"c403270a-6868-4dec-8340-ac3237f9028e\") " pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.138032 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c403270a-6868-4dec-8340-ac3237f9028e-combined-ca-bundle\") pod \"barbican-worker-555f79d94f-q55hl\" (UID: \"c403270a-6868-4dec-8340-ac3237f9028e\") " pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.163405 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c403270a-6868-4dec-8340-ac3237f9028e-config-data\") pod \"barbican-worker-555f79d94f-q55hl\" (UID: \"c403270a-6868-4dec-8340-ac3237f9028e\") " pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.167529 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c403270a-6868-4dec-8340-ac3237f9028e-config-data-custom\") pod \"barbican-worker-555f79d94f-q55hl\" (UID: \"c403270a-6868-4dec-8340-ac3237f9028e\") " pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.182858 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rzqv\" (UniqueName: \"kubernetes.io/projected/c403270a-6868-4dec-8340-ac3237f9028e-kube-api-access-4rzqv\") pod \"barbican-worker-555f79d94f-q55hl\" (UID: \"c403270a-6868-4dec-8340-ac3237f9028e\") " pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.238502 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-l6zd6"] Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.240502 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.243630 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1d9162f-7759-46d6-bea9-a9975470a1d9-logs\") pod \"barbican-keystone-listener-bb49b7794-577rp\" (UID: \"c1d9162f-7759-46d6-bea9-a9975470a1d9\") " pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.243696 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1d9162f-7759-46d6-bea9-a9975470a1d9-config-data\") pod \"barbican-keystone-listener-bb49b7794-577rp\" (UID: \"c1d9162f-7759-46d6-bea9-a9975470a1d9\") " pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.243725 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1d9162f-7759-46d6-bea9-a9975470a1d9-config-data-custom\") pod \"barbican-keystone-listener-bb49b7794-577rp\" (UID: \"c1d9162f-7759-46d6-bea9-a9975470a1d9\") " pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.244308 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2n5d\" (UniqueName: \"kubernetes.io/projected/c1d9162f-7759-46d6-bea9-a9975470a1d9-kube-api-access-f2n5d\") pod \"barbican-keystone-listener-bb49b7794-577rp\" (UID: \"c1d9162f-7759-46d6-bea9-a9975470a1d9\") " pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.244457 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1d9162f-7759-46d6-bea9-a9975470a1d9-combined-ca-bundle\") pod \"barbican-keystone-listener-bb49b7794-577rp\" (UID: \"c1d9162f-7759-46d6-bea9-a9975470a1d9\") " pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.250551 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1d9162f-7759-46d6-bea9-a9975470a1d9-logs\") pod \"barbican-keystone-listener-bb49b7794-577rp\" (UID: \"c1d9162f-7759-46d6-bea9-a9975470a1d9\") " pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.266511 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-555f79d94f-q55hl" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.280759 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-l6zd6"] Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.284786 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1d9162f-7759-46d6-bea9-a9975470a1d9-config-data\") pod \"barbican-keystone-listener-bb49b7794-577rp\" (UID: \"c1d9162f-7759-46d6-bea9-a9975470a1d9\") " pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.285603 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1d9162f-7759-46d6-bea9-a9975470a1d9-combined-ca-bundle\") pod \"barbican-keystone-listener-bb49b7794-577rp\" (UID: \"c1d9162f-7759-46d6-bea9-a9975470a1d9\") " pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.301335 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6dd99c6f6d-xbc48"] Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.303239 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.323987 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1d9162f-7759-46d6-bea9-a9975470a1d9-config-data-custom\") pod \"barbican-keystone-listener-bb49b7794-577rp\" (UID: \"c1d9162f-7759-46d6-bea9-a9975470a1d9\") " pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.325785 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.353739 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2n5d\" (UniqueName: \"kubernetes.io/projected/c1d9162f-7759-46d6-bea9-a9975470a1d9-kube-api-access-f2n5d\") pod \"barbican-keystone-listener-bb49b7794-577rp\" (UID: \"c1d9162f-7759-46d6-bea9-a9975470a1d9\") " pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.364714 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dd99c6f6d-xbc48"] Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.456587 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-combined-ca-bundle\") pod \"barbican-api-6dd99c6f6d-xbc48\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.456828 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: 
\"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.456866 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.457001 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-config-data-custom\") pod \"barbican-api-6dd99c6f6d-xbc48\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.457100 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.457125 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5tpp\" (UniqueName: \"kubernetes.io/projected/ce841728-ba54-4e71-923a-23a320afdc78-kube-api-access-m5tpp\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.457170 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpjgb\" (UniqueName: 
\"kubernetes.io/projected/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-kube-api-access-lpjgb\") pod \"barbican-api-6dd99c6f6d-xbc48\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.457260 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.457308 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-logs\") pod \"barbican-api-6dd99c6f6d-xbc48\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.457349 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-config-data\") pod \"barbican-api-6dd99c6f6d-xbc48\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.457371 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-config\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.561483 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.561565 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-logs\") pod \"barbican-api-6dd99c6f6d-xbc48\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.561599 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-config-data\") pod \"barbican-api-6dd99c6f6d-xbc48\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.561623 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-config\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.561651 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-combined-ca-bundle\") pod \"barbican-api-6dd99c6f6d-xbc48\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.561714 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-dns-svc\") pod 
\"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.561740 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.561788 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-config-data-custom\") pod \"barbican-api-6dd99c6f6d-xbc48\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.561835 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5tpp\" (UniqueName: \"kubernetes.io/projected/ce841728-ba54-4e71-923a-23a320afdc78-kube-api-access-m5tpp\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.561857 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.561885 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpjgb\" (UniqueName: \"kubernetes.io/projected/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-kube-api-access-lpjgb\") pod 
\"barbican-api-6dd99c6f6d-xbc48\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.562581 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.562665 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-logs\") pod \"barbican-api-6dd99c6f6d-xbc48\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.563191 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.563440 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.571269 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-config-data\") pod \"barbican-api-6dd99c6f6d-xbc48\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " 
pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.571329 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-config\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.571592 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.580476 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-config-data-custom\") pod \"barbican-api-6dd99c6f6d-xbc48\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.586219 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-combined-ca-bundle\") pod \"barbican-api-6dd99c6f6d-xbc48\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.594443 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-bb49b7794-577rp" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.599421 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5tpp\" (UniqueName: \"kubernetes.io/projected/ce841728-ba54-4e71-923a-23a320afdc78-kube-api-access-m5tpp\") pod \"dnsmasq-dns-7c67bffd47-l6zd6\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.624686 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpjgb\" (UniqueName: \"kubernetes.io/projected/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-kube-api-access-lpjgb\") pod \"barbican-api-6dd99c6f6d-xbc48\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.729800 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.740413 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-796fb887fb-dd2s5" event={"ID":"2e7b9632-7a45-48f5-8887-4c79543170fd","Type":"ContainerStarted","Data":"413aa59d0bf4f6f13c884016ac07450636ac15bdd69687a32f2e91876faa028c"} Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.748960 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.922790 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 09:01:42 crc kubenswrapper[4895]: I0129 09:01:42.923392 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:01:43 crc kubenswrapper[4895]: I0129 09:01:43.305078 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 09:01:43 crc kubenswrapper[4895]: I0129 09:01:43.754824 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:01:43 crc kubenswrapper[4895]: I0129 09:01:43.754861 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.365015 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5799c46566-89j6v"] Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.367643 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.374427 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.374498 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.385033 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5799c46566-89j6v"] Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.447344 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dcb59826-4f95-4127-b7fe-f32cd95cad8e-config-data-custom\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.447404 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phvpl\" (UniqueName: \"kubernetes.io/projected/dcb59826-4f95-4127-b7fe-f32cd95cad8e-kube-api-access-phvpl\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.447458 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb59826-4f95-4127-b7fe-f32cd95cad8e-config-data\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.447516 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/dcb59826-4f95-4127-b7fe-f32cd95cad8e-logs\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.447536 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb59826-4f95-4127-b7fe-f32cd95cad8e-internal-tls-certs\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.447552 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb59826-4f95-4127-b7fe-f32cd95cad8e-public-tls-certs\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.447589 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb59826-4f95-4127-b7fe-f32cd95cad8e-combined-ca-bundle\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.549936 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcb59826-4f95-4127-b7fe-f32cd95cad8e-logs\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.550349 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/dcb59826-4f95-4127-b7fe-f32cd95cad8e-internal-tls-certs\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.550448 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb59826-4f95-4127-b7fe-f32cd95cad8e-public-tls-certs\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.550566 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb59826-4f95-4127-b7fe-f32cd95cad8e-combined-ca-bundle\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.550672 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcb59826-4f95-4127-b7fe-f32cd95cad8e-logs\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.550723 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dcb59826-4f95-4127-b7fe-f32cd95cad8e-config-data-custom\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.550901 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phvpl\" (UniqueName: 
\"kubernetes.io/projected/dcb59826-4f95-4127-b7fe-f32cd95cad8e-kube-api-access-phvpl\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.551038 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb59826-4f95-4127-b7fe-f32cd95cad8e-config-data\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.558782 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb59826-4f95-4127-b7fe-f32cd95cad8e-public-tls-certs\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.559900 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb59826-4f95-4127-b7fe-f32cd95cad8e-internal-tls-certs\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.560703 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dcb59826-4f95-4127-b7fe-f32cd95cad8e-config-data-custom\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.560719 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dcb59826-4f95-4127-b7fe-f32cd95cad8e-config-data\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.561393 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb59826-4f95-4127-b7fe-f32cd95cad8e-combined-ca-bundle\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.577242 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phvpl\" (UniqueName: \"kubernetes.io/projected/dcb59826-4f95-4127-b7fe-f32cd95cad8e-kube-api-access-phvpl\") pod \"barbican-api-5799c46566-89j6v\" (UID: \"dcb59826-4f95-4127-b7fe-f32cd95cad8e\") " pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.705821 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.831351 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.831521 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:01:45 crc kubenswrapper[4895]: I0129 09:01:45.855422 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 09:01:47 crc kubenswrapper[4895]: I0129 09:01:47.616952 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-95b5h" Jan 29 09:01:47 crc kubenswrapper[4895]: I0129 09:01:47.704894 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75739842-3c97-4e9a-b13b-4fb5929461b8-config\") pod \"75739842-3c97-4e9a-b13b-4fb5929461b8\" (UID: \"75739842-3c97-4e9a-b13b-4fb5929461b8\") " Jan 29 09:01:47 crc kubenswrapper[4895]: I0129 09:01:47.705369 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75739842-3c97-4e9a-b13b-4fb5929461b8-combined-ca-bundle\") pod \"75739842-3c97-4e9a-b13b-4fb5929461b8\" (UID: \"75739842-3c97-4e9a-b13b-4fb5929461b8\") " Jan 29 09:01:47 crc kubenswrapper[4895]: I0129 09:01:47.705546 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjmnj\" (UniqueName: \"kubernetes.io/projected/75739842-3c97-4e9a-b13b-4fb5929461b8-kube-api-access-kjmnj\") pod \"75739842-3c97-4e9a-b13b-4fb5929461b8\" (UID: \"75739842-3c97-4e9a-b13b-4fb5929461b8\") " Jan 29 09:01:47 crc kubenswrapper[4895]: I0129 09:01:47.732631 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75739842-3c97-4e9a-b13b-4fb5929461b8-kube-api-access-kjmnj" (OuterVolumeSpecName: "kube-api-access-kjmnj") pod "75739842-3c97-4e9a-b13b-4fb5929461b8" (UID: "75739842-3c97-4e9a-b13b-4fb5929461b8"). InnerVolumeSpecName "kube-api-access-kjmnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:47 crc kubenswrapper[4895]: I0129 09:01:47.756208 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75739842-3c97-4e9a-b13b-4fb5929461b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75739842-3c97-4e9a-b13b-4fb5929461b8" (UID: "75739842-3c97-4e9a-b13b-4fb5929461b8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:47 crc kubenswrapper[4895]: I0129 09:01:47.808839 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjmnj\" (UniqueName: \"kubernetes.io/projected/75739842-3c97-4e9a-b13b-4fb5929461b8-kube-api-access-kjmnj\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:47 crc kubenswrapper[4895]: I0129 09:01:47.808890 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75739842-3c97-4e9a-b13b-4fb5929461b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:47 crc kubenswrapper[4895]: I0129 09:01:47.821319 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-95b5h" event={"ID":"75739842-3c97-4e9a-b13b-4fb5929461b8","Type":"ContainerDied","Data":"5b720908c08c6d3ed0e1bc1215b4a5f9780d9802dbaeb1d9a8beb3de1722b740"} Jan 29 09:01:47 crc kubenswrapper[4895]: I0129 09:01:47.821373 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b720908c08c6d3ed0e1bc1215b4a5f9780d9802dbaeb1d9a8beb3de1722b740" Jan 29 09:01:47 crc kubenswrapper[4895]: I0129 09:01:47.821449 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-95b5h" Jan 29 09:01:47 crc kubenswrapper[4895]: I0129 09:01:47.823187 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75739842-3c97-4e9a-b13b-4fb5929461b8-config" (OuterVolumeSpecName: "config") pod "75739842-3c97-4e9a-b13b-4fb5929461b8" (UID: "75739842-3c97-4e9a-b13b-4fb5929461b8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:47 crc kubenswrapper[4895]: I0129 09:01:47.911090 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/75739842-3c97-4e9a-b13b-4fb5929461b8-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.006645 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dd99c6f6d-xbc48"] Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.160197 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5799c46566-89j6v"] Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.177481 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-555f79d94f-q55hl"] Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.303537 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-bb49b7794-577rp"] Jan 29 09:01:48 crc kubenswrapper[4895]: W0129 09:01:48.433400 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce841728_ba54_4e71_923a_23a320afdc78.slice/crio-398200eaa80c235d12b75078b743708eff8a7aa8ffed2e8383b64e69fbc99b82 WatchSource:0}: Error finding container 398200eaa80c235d12b75078b743708eff8a7aa8ffed2e8383b64e69fbc99b82: Status 404 returned error can't find the container with id 398200eaa80c235d12b75078b743708eff8a7aa8ffed2e8383b64e69fbc99b82 Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.436589 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-l6zd6"] Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.852414 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-l6zd6"] Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.906170 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-6dd99c6f6d-xbc48" event={"ID":"e93af0a8-f847-4968-8e6c-8433e4f0e4c0","Type":"ContainerStarted","Data":"f962fa57f05a65dc6d267b151d18d888e84c7595521c08c9d8146df33c02281c"} Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.906251 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dd99c6f6d-xbc48" event={"ID":"e93af0a8-f847-4968-8e6c-8433e4f0e4c0","Type":"ContainerStarted","Data":"115de8d75c349a8a406fcc38b1e95144ae10cb7a4ee5af5b1534c6408250e4a1"} Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.906266 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dd99c6f6d-xbc48" event={"ID":"e93af0a8-f847-4968-8e6c-8433e4f0e4c0","Type":"ContainerStarted","Data":"97d26199b60ec6246460cddc1e91f7f23b2a3d78701105d9cf0567efeaede9d3"} Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.909201 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.909271 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.913961 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee","Type":"ContainerStarted","Data":"cf4e6d2a5e74c27ec2dc66f6342b9d0c68f7ecfd4bcbf92b92ceff0a171d25cc"} Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.916313 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xqv89" event={"ID":"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7","Type":"ContainerStarted","Data":"19b8c230279ace9dfabd0515cf5fd30bc53be821c268523cabaef2c5c8f92a38"} Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.924281 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-x7jwl"] Jan 29 09:01:48 crc kubenswrapper[4895]: E0129 
09:01:48.924868 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75739842-3c97-4e9a-b13b-4fb5929461b8" containerName="neutron-db-sync" Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.924894 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="75739842-3c97-4e9a-b13b-4fb5929461b8" containerName="neutron-db-sync" Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.925210 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="75739842-3c97-4e9a-b13b-4fb5929461b8" containerName="neutron-db-sync" Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.935185 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.935238 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5799c46566-89j6v" event={"ID":"dcb59826-4f95-4127-b7fe-f32cd95cad8e","Type":"ContainerStarted","Data":"b9e80a52cac84869e6c8bb2d1d2ee7f116d36715f0e2402b4fa97571f1c1b88b"} Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.935267 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.935280 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5799c46566-89j6v" event={"ID":"dcb59826-4f95-4127-b7fe-f32cd95cad8e","Type":"ContainerStarted","Data":"3245d087641c342580ba96533c0a848fd6fd6c4a7394366ba66450d56ba682f1"} Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.935292 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5799c46566-89j6v" event={"ID":"dcb59826-4f95-4127-b7fe-f32cd95cad8e","Type":"ContainerStarted","Data":"7eb7f423d6ed8817b5fc1e87b51b891224b0bbe292d3c224aea8b6db01f5454c"} Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.935435 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.968982 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-796fb887fb-dd2s5" event={"ID":"2e7b9632-7a45-48f5-8887-4c79543170fd","Type":"ContainerStarted","Data":"fa21cdaa58033b247bcbef3eb8fc217aa754757511ab0bdb32b43fa5c1c77453"} Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.969315 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.970044 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.970073 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-x7jwl"] Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.983775 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bb49b7794-577rp" event={"ID":"c1d9162f-7759-46d6-bea9-a9975470a1d9","Type":"ContainerStarted","Data":"c1c1f85507bf7aefb6498c31c4e2969fa8ec3ea1b2815e2b0470777124dc6d9f"} Jan 29 09:01:48 crc kubenswrapper[4895]: I0129 09:01:48.999441 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-555f79d94f-q55hl" event={"ID":"c403270a-6868-4dec-8340-ac3237f9028e","Type":"ContainerStarted","Data":"b8bc219659dee364334e2376e4577d65e22bbf6b9cdd3999eb94a64ca12c8454"} Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.001649 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" event={"ID":"ce841728-ba54-4e71-923a-23a320afdc78","Type":"ContainerStarted","Data":"398200eaa80c235d12b75078b743708eff8a7aa8ffed2e8383b64e69fbc99b82"} Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.006759 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-api-6dd99c6f6d-xbc48" podStartSLOduration=7.006728558 podStartE2EDuration="7.006728558s" podCreationTimestamp="2026-01-29 09:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:48.950228073 +0000 UTC m=+1250.591736259" watchObservedRunningTime="2026-01-29 09:01:49.006728558 +0000 UTC m=+1250.648236704" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.059883 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.060088 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.060154 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9s4b\" (UniqueName: \"kubernetes.io/projected/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-kube-api-access-z9s4b\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.060179 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-config\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: 
\"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.060212 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.060298 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.066337 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-xqv89" podStartSLOduration=4.06202603 podStartE2EDuration="45.066304374s" podCreationTimestamp="2026-01-29 09:01:04 +0000 UTC" firstStartedPulling="2026-01-29 09:01:06.736605506 +0000 UTC m=+1208.378113652" lastFinishedPulling="2026-01-29 09:01:47.74088385 +0000 UTC m=+1249.382391996" observedRunningTime="2026-01-29 09:01:49.017470465 +0000 UTC m=+1250.658978611" watchObservedRunningTime="2026-01-29 09:01:49.066304374 +0000 UTC m=+1250.707812520" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.106736 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c686984cb-9nzt7"] Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.110161 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.114479 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-765t2" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.117808 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.118118 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.118587 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.122868 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c686984cb-9nzt7"] Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.123513 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5799c46566-89j6v" podStartSLOduration=4.123498827 podStartE2EDuration="4.123498827s" podCreationTimestamp="2026-01-29 09:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:49.058034142 +0000 UTC m=+1250.699542288" watchObservedRunningTime="2026-01-29 09:01:49.123498827 +0000 UTC m=+1250.765006973" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.152632 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-796fb887fb-dd2s5" podStartSLOduration=9.152561426 podStartE2EDuration="9.152561426s" podCreationTimestamp="2026-01-29 09:01:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:49.090666088 +0000 UTC m=+1250.732174234" watchObservedRunningTime="2026-01-29 09:01:49.152561426 
+0000 UTC m=+1250.794069572" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.163209 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.163266 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.163333 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-httpd-config\") pod \"neutron-6c686984cb-9nzt7\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") " pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.163358 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-combined-ca-bundle\") pod \"neutron-6c686984cb-9nzt7\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") " pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.163385 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-ovndb-tls-certs\") pod \"neutron-6c686984cb-9nzt7\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") " pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 
09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.163435 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.163485 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9s4b\" (UniqueName: \"kubernetes.io/projected/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-kube-api-access-z9s4b\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.163505 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-config\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.163529 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.163557 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-config\") pod \"neutron-6c686984cb-9nzt7\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") " pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 
09:01:49.163585 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqtgz\" (UniqueName: \"kubernetes.io/projected/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-kube-api-access-gqtgz\") pod \"neutron-6c686984cb-9nzt7\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") " pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.164996 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-config\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.164990 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.165136 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.165593 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.168970 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.195748 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9s4b\" (UniqueName: \"kubernetes.io/projected/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-kube-api-access-z9s4b\") pod \"dnsmasq-dns-848cf88cfc-x7jwl\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.266107 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqtgz\" (UniqueName: \"kubernetes.io/projected/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-kube-api-access-gqtgz\") pod \"neutron-6c686984cb-9nzt7\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") " pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.266312 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-httpd-config\") pod \"neutron-6c686984cb-9nzt7\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") " pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.266371 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-combined-ca-bundle\") pod \"neutron-6c686984cb-9nzt7\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") " pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.266410 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-ovndb-tls-certs\") pod \"neutron-6c686984cb-9nzt7\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") " pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.266553 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-config\") pod \"neutron-6c686984cb-9nzt7\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") " pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.273518 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-config\") pod \"neutron-6c686984cb-9nzt7\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") " pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.274602 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-ovndb-tls-certs\") pod \"neutron-6c686984cb-9nzt7\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") " pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.277904 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.278488 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-combined-ca-bundle\") pod \"neutron-6c686984cb-9nzt7\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") " pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.294019 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-httpd-config\") pod \"neutron-6c686984cb-9nzt7\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") " pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.350699 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqtgz\" (UniqueName: \"kubernetes.io/projected/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-kube-api-access-gqtgz\") pod \"neutron-6c686984cb-9nzt7\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") " pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:49 crc kubenswrapper[4895]: I0129 09:01:49.491154 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:49 crc kubenswrapper[4895]: E0129 09:01:49.994500 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode93af0a8_f847_4968_8e6c_8433e4f0e4c0.slice/crio-f962fa57f05a65dc6d267b151d18d888e84c7595521c08c9d8146df33c02281c.scope\": RecentStats: unable to find data in memory cache]" Jan 29 09:01:50 crc kubenswrapper[4895]: I0129 09:01:50.056900 4895 generic.go:334] "Generic (PLEG): container finished" podID="ce841728-ba54-4e71-923a-23a320afdc78" containerID="70f2041d1824737bfe645d568de498ff891c0ab8d0d9d74805c229dc4cbee274" exitCode=0 Jan 29 09:01:50 crc kubenswrapper[4895]: I0129 09:01:50.056996 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" event={"ID":"ce841728-ba54-4e71-923a-23a320afdc78","Type":"ContainerDied","Data":"70f2041d1824737bfe645d568de498ff891c0ab8d0d9d74805c229dc4cbee274"} Jan 29 09:01:50 crc kubenswrapper[4895]: I0129 09:01:50.076748 4895 generic.go:334] "Generic (PLEG): container finished" podID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerID="f962fa57f05a65dc6d267b151d18d888e84c7595521c08c9d8146df33c02281c" exitCode=1 Jan 29 09:01:50 crc kubenswrapper[4895]: I0129 09:01:50.080250 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dd99c6f6d-xbc48" event={"ID":"e93af0a8-f847-4968-8e6c-8433e4f0e4c0","Type":"ContainerDied","Data":"f962fa57f05a65dc6d267b151d18d888e84c7595521c08c9d8146df33c02281c"} Jan 29 09:01:50 crc kubenswrapper[4895]: I0129 09:01:50.083264 4895 scope.go:117] "RemoveContainer" containerID="f962fa57f05a65dc6d267b151d18d888e84c7595521c08c9d8146df33c02281c" Jan 29 09:01:50 crc kubenswrapper[4895]: I0129 09:01:50.401608 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c686984cb-9nzt7"] Jan 29 09:01:50 crc kubenswrapper[4895]: 
I0129 09:01:50.592140 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-x7jwl"] Jan 29 09:01:50 crc kubenswrapper[4895]: W0129 09:01:50.626978 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf4d1b85_39a1_46ad_8a21_60a165dbbf6d.slice/crio-27d7332e96d9fef1bb1670210e6e7c6c47e9d68c1cf319dced45eb71c8833bdc WatchSource:0}: Error finding container 27d7332e96d9fef1bb1670210e6e7c6c47e9d68c1cf319dced45eb71c8833bdc: Status 404 returned error can't find the container with id 27d7332e96d9fef1bb1670210e6e7c6c47e9d68c1cf319dced45eb71c8833bdc Jan 29 09:01:50 crc kubenswrapper[4895]: W0129 09:01:50.652934 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc62414b9_e9f1_4a5c_8448_09565e6fd3e8.slice/crio-a691716ff38de0b18c63d45e1e1b25c533fb87d2e9fadde7886af87f6725648d WatchSource:0}: Error finding container a691716ff38de0b18c63d45e1e1b25c533fb87d2e9fadde7886af87f6725648d: Status 404 returned error can't find the container with id a691716ff38de0b18c63d45e1e1b25c533fb87d2e9fadde7886af87f6725648d Jan 29 09:01:50 crc kubenswrapper[4895]: I0129 09:01:50.841978 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:50 crc kubenswrapper[4895]: I0129 09:01:50.955815 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-ovsdbserver-nb\") pod \"ce841728-ba54-4e71-923a-23a320afdc78\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " Jan 29 09:01:50 crc kubenswrapper[4895]: I0129 09:01:50.956041 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-dns-svc\") pod \"ce841728-ba54-4e71-923a-23a320afdc78\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " Jan 29 09:01:50 crc kubenswrapper[4895]: I0129 09:01:50.956079 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-ovsdbserver-sb\") pod \"ce841728-ba54-4e71-923a-23a320afdc78\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " Jan 29 09:01:50 crc kubenswrapper[4895]: I0129 09:01:50.956117 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-config\") pod \"ce841728-ba54-4e71-923a-23a320afdc78\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " Jan 29 09:01:50 crc kubenswrapper[4895]: I0129 09:01:50.956216 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-dns-swift-storage-0\") pod \"ce841728-ba54-4e71-923a-23a320afdc78\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " Jan 29 09:01:50 crc kubenswrapper[4895]: I0129 09:01:50.956249 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5tpp\" 
(UniqueName: \"kubernetes.io/projected/ce841728-ba54-4e71-923a-23a320afdc78-kube-api-access-m5tpp\") pod \"ce841728-ba54-4e71-923a-23a320afdc78\" (UID: \"ce841728-ba54-4e71-923a-23a320afdc78\") " Jan 29 09:01:50 crc kubenswrapper[4895]: I0129 09:01:50.969786 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce841728-ba54-4e71-923a-23a320afdc78-kube-api-access-m5tpp" (OuterVolumeSpecName: "kube-api-access-m5tpp") pod "ce841728-ba54-4e71-923a-23a320afdc78" (UID: "ce841728-ba54-4e71-923a-23a320afdc78"). InnerVolumeSpecName "kube-api-access-m5tpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:50 crc kubenswrapper[4895]: I0129 09:01:50.989357 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce841728-ba54-4e71-923a-23a320afdc78" (UID: "ce841728-ba54-4e71-923a-23a320afdc78"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.003771 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce841728-ba54-4e71-923a-23a320afdc78" (UID: "ce841728-ba54-4e71-923a-23a320afdc78"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.003829 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-config" (OuterVolumeSpecName: "config") pod "ce841728-ba54-4e71-923a-23a320afdc78" (UID: "ce841728-ba54-4e71-923a-23a320afdc78"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.004536 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ce841728-ba54-4e71-923a-23a320afdc78" (UID: "ce841728-ba54-4e71-923a-23a320afdc78"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.058042 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.058070 4895 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.058081 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5tpp\" (UniqueName: \"kubernetes.io/projected/ce841728-ba54-4e71-923a-23a320afdc78-kube-api-access-m5tpp\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.058089 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.058098 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.080830 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce841728-ba54-4e71-923a-23a320afdc78" (UID: "ce841728-ba54-4e71-923a-23a320afdc78"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.126056 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c686984cb-9nzt7" event={"ID":"df4d1b85-39a1-46ad-8a21-60a165dbbf6d","Type":"ContainerStarted","Data":"396eda2b3209074288f3195193f5775a3b1ffa1e6e4ee4854b7f6fad7a771887"} Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.126123 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c686984cb-9nzt7" event={"ID":"df4d1b85-39a1-46ad-8a21-60a165dbbf6d","Type":"ContainerStarted","Data":"27d7332e96d9fef1bb1670210e6e7c6c47e9d68c1cf319dced45eb71c8833bdc"} Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.130149 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" event={"ID":"c62414b9-e9f1-4a5c-8448-09565e6fd3e8","Type":"ContainerStarted","Data":"a691716ff38de0b18c63d45e1e1b25c533fb87d2e9fadde7886af87f6725648d"} Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.132170 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" event={"ID":"ce841728-ba54-4e71-923a-23a320afdc78","Type":"ContainerDied","Data":"398200eaa80c235d12b75078b743708eff8a7aa8ffed2e8383b64e69fbc99b82"} Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.132211 4895 scope.go:117] "RemoveContainer" containerID="70f2041d1824737bfe645d568de498ff891c0ab8d0d9d74805c229dc4cbee274" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.132527 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-l6zd6" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.154456 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dd99c6f6d-xbc48" event={"ID":"e93af0a8-f847-4968-8e6c-8433e4f0e4c0","Type":"ContainerStarted","Data":"3e9ac2ac65d2f1c1212467eb31a281878a58d1087abb71599cd9334b54b7254f"} Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.154773 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.159897 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce841728-ba54-4e71-923a-23a320afdc78-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.260125 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-l6zd6"] Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.303044 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-l6zd6"] Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.567188 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-796fb887fb-dd2s5" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.800723 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6b548b4f8c-kc92t"] Jan 29 09:01:51 crc kubenswrapper[4895]: E0129 09:01:51.803154 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce841728-ba54-4e71-923a-23a320afdc78" containerName="init" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.803251 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce841728-ba54-4e71-923a-23a320afdc78" containerName="init" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.803849 4895 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ce841728-ba54-4e71-923a-23a320afdc78" containerName="init" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.817371 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.821652 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.821869 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.839539 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b548b4f8c-kc92t"] Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.885340 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-internal-tls-certs\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.885809 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-ovndb-tls-certs\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.885996 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-httpd-config\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 
09:01:51.886100 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-public-tls-certs\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.886246 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-combined-ca-bundle\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.886511 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-config\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.886811 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gggww\" (UniqueName: \"kubernetes.io/projected/657c2688-8379-4121-a64a-89c1fd428b57-kube-api-access-gggww\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.989348 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-combined-ca-bundle\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 
09:01:51.989861 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-config\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.990087 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gggww\" (UniqueName: \"kubernetes.io/projected/657c2688-8379-4121-a64a-89c1fd428b57-kube-api-access-gggww\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.990246 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-internal-tls-certs\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.990419 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-ovndb-tls-certs\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.990570 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-httpd-config\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:51 crc kubenswrapper[4895]: I0129 09:01:51.990690 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-public-tls-certs\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:52 crc kubenswrapper[4895]: I0129 09:01:52.000227 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-httpd-config\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:52 crc kubenswrapper[4895]: I0129 09:01:52.004828 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-internal-tls-certs\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:52 crc kubenswrapper[4895]: I0129 09:01:52.005112 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-combined-ca-bundle\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:52 crc kubenswrapper[4895]: I0129 09:01:52.006648 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-public-tls-certs\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:52 crc kubenswrapper[4895]: I0129 09:01:52.012572 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-ovndb-tls-certs\") pod 
\"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:52 crc kubenswrapper[4895]: I0129 09:01:52.019694 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gggww\" (UniqueName: \"kubernetes.io/projected/657c2688-8379-4121-a64a-89c1fd428b57-kube-api-access-gggww\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:52 crc kubenswrapper[4895]: I0129 09:01:52.024021 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/657c2688-8379-4121-a64a-89c1fd428b57-config\") pod \"neutron-6b548b4f8c-kc92t\" (UID: \"657c2688-8379-4121-a64a-89c1fd428b57\") " pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:52 crc kubenswrapper[4895]: I0129 09:01:52.138815 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:52 crc kubenswrapper[4895]: I0129 09:01:52.177907 4895 generic.go:334] "Generic (PLEG): container finished" podID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerID="3e9ac2ac65d2f1c1212467eb31a281878a58d1087abb71599cd9334b54b7254f" exitCode=1 Jan 29 09:01:52 crc kubenswrapper[4895]: I0129 09:01:52.178191 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dd99c6f6d-xbc48" event={"ID":"e93af0a8-f847-4968-8e6c-8433e4f0e4c0","Type":"ContainerDied","Data":"3e9ac2ac65d2f1c1212467eb31a281878a58d1087abb71599cd9334b54b7254f"} Jan 29 09:01:52 crc kubenswrapper[4895]: I0129 09:01:52.179497 4895 scope.go:117] "RemoveContainer" containerID="3e9ac2ac65d2f1c1212467eb31a281878a58d1087abb71599cd9334b54b7254f" Jan 29 09:01:52 crc kubenswrapper[4895]: E0129 09:01:52.179780 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-api\" with CrashLoopBackOff: \"back-off 
10s restarting failed container=barbican-api pod=barbican-api-6dd99c6f6d-xbc48_openstack(e93af0a8-f847-4968-8e6c-8433e4f0e4c0)\"" pod="openstack/barbican-api-6dd99c6f6d-xbc48" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" Jan 29 09:01:52 crc kubenswrapper[4895]: I0129 09:01:52.184264 4895 generic.go:334] "Generic (PLEG): container finished" podID="c62414b9-e9f1-4a5c-8448-09565e6fd3e8" containerID="6cf876b9ccb15f27dca5a8f6d6173d11876af82e66557a745c2b5a7206f2923b" exitCode=0 Jan 29 09:01:52 crc kubenswrapper[4895]: I0129 09:01:52.184368 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" event={"ID":"c62414b9-e9f1-4a5c-8448-09565e6fd3e8","Type":"ContainerDied","Data":"6cf876b9ccb15f27dca5a8f6d6173d11876af82e66557a745c2b5a7206f2923b"} Jan 29 09:01:52 crc kubenswrapper[4895]: I0129 09:01:52.259635 4895 scope.go:117] "RemoveContainer" containerID="f962fa57f05a65dc6d267b151d18d888e84c7595521c08c9d8146df33c02281c" Jan 29 09:01:53 crc kubenswrapper[4895]: I0129 09:01:53.210354 4895 scope.go:117] "RemoveContainer" containerID="3e9ac2ac65d2f1c1212467eb31a281878a58d1087abb71599cd9334b54b7254f" Jan 29 09:01:53 crc kubenswrapper[4895]: E0129 09:01:53.211185 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=barbican-api pod=barbican-api-6dd99c6f6d-xbc48_openstack(e93af0a8-f847-4968-8e6c-8433e4f0e4c0)\"" pod="openstack/barbican-api-6dd99c6f6d-xbc48" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" Jan 29 09:01:53 crc kubenswrapper[4895]: I0129 09:01:53.221075 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dd99c6f6d-xbc48" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": dial tcp 10.217.0.157:9311: connect: connection refused" Jan 29 09:01:53 crc kubenswrapper[4895]: 
I0129 09:01:53.270383 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce841728-ba54-4e71-923a-23a320afdc78" path="/var/lib/kubelet/pods/ce841728-ba54-4e71-923a-23a320afdc78/volumes" Jan 29 09:01:53 crc kubenswrapper[4895]: I0129 09:01:53.532040 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b548b4f8c-kc92t"] Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.284577 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" event={"ID":"c62414b9-e9f1-4a5c-8448-09565e6fd3e8","Type":"ContainerStarted","Data":"5f6011f07a38b30411edc845dc463413e2f781c4866e0fe214c028b16096a181"} Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.284966 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.289124 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c686984cb-9nzt7" event={"ID":"df4d1b85-39a1-46ad-8a21-60a165dbbf6d","Type":"ContainerStarted","Data":"37d6efe9ff64dc67d137d7446d909566f2a8b08d88bbbc5bfa19e50d85ce14ed"} Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.289431 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.296993 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bb49b7794-577rp" event={"ID":"c1d9162f-7759-46d6-bea9-a9975470a1d9","Type":"ContainerStarted","Data":"a66996b5d2661c95c10453d049e858c5e25a8cdcf82efb5df76fdd100e673dd7"} Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.297082 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bb49b7794-577rp" 
event={"ID":"c1d9162f-7759-46d6-bea9-a9975470a1d9","Type":"ContainerStarted","Data":"fe914fb61889e2618907ac82a3c879f0c0e45f834417ee7013a12203e886d90a"} Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.313555 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-555f79d94f-q55hl" event={"ID":"c403270a-6868-4dec-8340-ac3237f9028e","Type":"ContainerStarted","Data":"8ab23277e77c13caaa72323676f3ec86c1e42d4293418de8f9d4936d88841f6f"} Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.313611 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-555f79d94f-q55hl" event={"ID":"c403270a-6868-4dec-8340-ac3237f9028e","Type":"ContainerStarted","Data":"3a3730c99c24f7cb962b5eb90ea2727ec30af8beaa4ed39b4753250867b7de00"} Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.319668 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" podStartSLOduration=6.319638666 podStartE2EDuration="6.319638666s" podCreationTimestamp="2026-01-29 09:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:54.306603847 +0000 UTC m=+1255.948112003" watchObservedRunningTime="2026-01-29 09:01:54.319638666 +0000 UTC m=+1255.961146812" Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.324234 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b548b4f8c-kc92t" event={"ID":"657c2688-8379-4121-a64a-89c1fd428b57","Type":"ContainerStarted","Data":"60244eebc0e2176c7887cee5ed6df1def1732ab4a63501c31c376a868a42cc27"} Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.324305 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b548b4f8c-kc92t" event={"ID":"657c2688-8379-4121-a64a-89c1fd428b57","Type":"ContainerStarted","Data":"ac309f0a505910412de0c8a08f528bc57299873a2156558f4f060d0936752669"} Jan 29 09:01:54 
crc kubenswrapper[4895]: I0129 09:01:54.333734 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-bb49b7794-577rp" podStartSLOduration=8.77806137 podStartE2EDuration="13.333709723s" podCreationTimestamp="2026-01-29 09:01:41 +0000 UTC" firstStartedPulling="2026-01-29 09:01:48.321178723 +0000 UTC m=+1249.962686869" lastFinishedPulling="2026-01-29 09:01:52.876827076 +0000 UTC m=+1254.518335222" observedRunningTime="2026-01-29 09:01:54.329218543 +0000 UTC m=+1255.970726689" watchObservedRunningTime="2026-01-29 09:01:54.333709723 +0000 UTC m=+1255.975217869" Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.386054 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c686984cb-9nzt7" podStartSLOduration=5.385951954 podStartE2EDuration="5.385951954s" podCreationTimestamp="2026-01-29 09:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:54.359582267 +0000 UTC m=+1256.001090423" watchObservedRunningTime="2026-01-29 09:01:54.385951954 +0000 UTC m=+1256.027460100" Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.388365 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-555f79d94f-q55hl" podStartSLOduration=8.695330953 podStartE2EDuration="13.388356888s" podCreationTimestamp="2026-01-29 09:01:41 +0000 UTC" firstStartedPulling="2026-01-29 09:01:48.182769173 +0000 UTC m=+1249.824277319" lastFinishedPulling="2026-01-29 09:01:52.875795108 +0000 UTC m=+1254.517303254" observedRunningTime="2026-01-29 09:01:54.385553923 +0000 UTC m=+1256.027062079" watchObservedRunningTime="2026-01-29 09:01:54.388356888 +0000 UTC m=+1256.029865034" Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.750818 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.751315 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6dd99c6f6d-xbc48" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": dial tcp 10.217.0.157:9311: connect: connection refused" Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.752060 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dd99c6f6d-xbc48" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": dial tcp 10.217.0.157:9311: connect: connection refused" Jan 29 09:01:54 crc kubenswrapper[4895]: I0129 09:01:54.752597 4895 scope.go:117] "RemoveContainer" containerID="3e9ac2ac65d2f1c1212467eb31a281878a58d1087abb71599cd9334b54b7254f" Jan 29 09:01:54 crc kubenswrapper[4895]: E0129 09:01:54.752874 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=barbican-api pod=barbican-api-6dd99c6f6d-xbc48_openstack(e93af0a8-f847-4968-8e6c-8433e4f0e4c0)\"" pod="openstack/barbican-api-6dd99c6f6d-xbc48" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" Jan 29 09:01:55 crc kubenswrapper[4895]: I0129 09:01:55.344042 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b548b4f8c-kc92t" event={"ID":"657c2688-8379-4121-a64a-89c1fd428b57","Type":"ContainerStarted","Data":"58e6547ba949114ab7b2ba562c43f2646b3f97c9f96a276622fa436dd2238ab7"} Jan 29 09:01:55 crc kubenswrapper[4895]: I0129 09:01:55.393669 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6b548b4f8c-kc92t" podStartSLOduration=4.393639582 podStartE2EDuration="4.393639582s" podCreationTimestamp="2026-01-29 
09:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:55.370097951 +0000 UTC m=+1257.011606107" watchObservedRunningTime="2026-01-29 09:01:55.393639582 +0000 UTC m=+1257.035147728" Jan 29 09:01:56 crc kubenswrapper[4895]: I0129 09:01:56.355244 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:01:57 crc kubenswrapper[4895]: I0129 09:01:57.369569 4895 generic.go:334] "Generic (PLEG): container finished" podID="03042a97-0311-4d0c-9878-380987ec9407" containerID="e9e654af3592b36d862a254b4f8e02894fb0c64a3069f796d8bb6f59fa0e5d60" exitCode=0 Jan 29 09:01:57 crc kubenswrapper[4895]: I0129 09:01:57.369643 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-4mm74" event={"ID":"03042a97-0311-4d0c-9878-380987ec9407","Type":"ContainerDied","Data":"e9e654af3592b36d862a254b4f8e02894fb0c64a3069f796d8bb6f59fa0e5d60"} Jan 29 09:01:57 crc kubenswrapper[4895]: I0129 09:01:57.373211 4895 generic.go:334] "Generic (PLEG): container finished" podID="d3fc9317-29e0-4b1c-b598-5d95fc98a1e7" containerID="19b8c230279ace9dfabd0515cf5fd30bc53be821c268523cabaef2c5c8f92a38" exitCode=0 Jan 29 09:01:57 crc kubenswrapper[4895]: I0129 09:01:57.373301 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xqv89" event={"ID":"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7","Type":"ContainerDied","Data":"19b8c230279ace9dfabd0515cf5fd30bc53be821c268523cabaef2c5c8f92a38"} Jan 29 09:01:57 crc kubenswrapper[4895]: I0129 09:01:57.508809 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:57 crc kubenswrapper[4895]: I0129 09:01:57.751181 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6dd99c6f6d-xbc48" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" 
containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": dial tcp 10.217.0.157:9311: connect: connection refused" Jan 29 09:01:57 crc kubenswrapper[4895]: I0129 09:01:57.751214 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dd99c6f6d-xbc48" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": dial tcp 10.217.0.157:9311: connect: connection refused" Jan 29 09:01:57 crc kubenswrapper[4895]: I0129 09:01:57.966410 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5799c46566-89j6v" Jan 29 09:01:58 crc kubenswrapper[4895]: I0129 09:01:58.049231 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6dd99c6f6d-xbc48"] Jan 29 09:01:58 crc kubenswrapper[4895]: I0129 09:01:58.049534 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6dd99c6f6d-xbc48" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerName="barbican-api-log" containerID="cri-o://115de8d75c349a8a406fcc38b1e95144ae10cb7a4ee5af5b1534c6408250e4a1" gracePeriod=30 Jan 29 09:01:58 crc kubenswrapper[4895]: I0129 09:01:58.053560 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dd99c6f6d-xbc48" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": dial tcp 10.217.0.157:9311: connect: connection refused" Jan 29 09:01:58 crc kubenswrapper[4895]: I0129 09:01:58.386756 4895 generic.go:334] "Generic (PLEG): container finished" podID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerID="115de8d75c349a8a406fcc38b1e95144ae10cb7a4ee5af5b1534c6408250e4a1" exitCode=143 Jan 29 09:01:58 crc kubenswrapper[4895]: I0129 09:01:58.386945 4895 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-api-6dd99c6f6d-xbc48" event={"ID":"e93af0a8-f847-4968-8e6c-8433e4f0e4c0","Type":"ContainerDied","Data":"115de8d75c349a8a406fcc38b1e95144ae10cb7a4ee5af5b1534c6408250e4a1"} Jan 29 09:01:59 crc kubenswrapper[4895]: I0129 09:01:59.284157 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:01:59 crc kubenswrapper[4895]: I0129 09:01:59.396483 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zm6xl"] Jan 29 09:01:59 crc kubenswrapper[4895]: I0129 09:01:59.396771 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" podUID="188f6bfb-7531-44aa-890b-6658e39aa184" containerName="dnsmasq-dns" containerID="cri-o://f392274ee669920d8a90e3d21da83c3dcd76a174babb73fe75efa8daed63bdee" gracePeriod=10 Jan 29 09:02:00 crc kubenswrapper[4895]: I0129 09:02:00.415068 4895 generic.go:334] "Generic (PLEG): container finished" podID="188f6bfb-7531-44aa-890b-6658e39aa184" containerID="f392274ee669920d8a90e3d21da83c3dcd76a174babb73fe75efa8daed63bdee" exitCode=0 Jan 29 09:02:00 crc kubenswrapper[4895]: I0129 09:02:00.415512 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" event={"ID":"188f6bfb-7531-44aa-890b-6658e39aa184","Type":"ContainerDied","Data":"f392274ee669920d8a90e3d21da83c3dcd76a174babb73fe75efa8daed63bdee"} Jan 29 09:02:00 crc kubenswrapper[4895]: I0129 09:02:00.645295 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" podUID="188f6bfb-7531-44aa-890b-6658e39aa184" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.256946 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-4mm74" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.377515 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/03042a97-0311-4d0c-9878-380987ec9407-config-data-merged\") pod \"03042a97-0311-4d0c-9878-380987ec9407\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.377635 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-config-data\") pod \"03042a97-0311-4d0c-9878-380987ec9407\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.377715 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/03042a97-0311-4d0c-9878-380987ec9407-etc-podinfo\") pod \"03042a97-0311-4d0c-9878-380987ec9407\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.377740 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pqdt\" (UniqueName: \"kubernetes.io/projected/03042a97-0311-4d0c-9878-380987ec9407-kube-api-access-5pqdt\") pod \"03042a97-0311-4d0c-9878-380987ec9407\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.377905 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-scripts\") pod \"03042a97-0311-4d0c-9878-380987ec9407\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.378024 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-combined-ca-bundle\") pod \"03042a97-0311-4d0c-9878-380987ec9407\" (UID: \"03042a97-0311-4d0c-9878-380987ec9407\") " Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.379136 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03042a97-0311-4d0c-9878-380987ec9407-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "03042a97-0311-4d0c-9878-380987ec9407" (UID: "03042a97-0311-4d0c-9878-380987ec9407"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.385907 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-scripts" (OuterVolumeSpecName: "scripts") pod "03042a97-0311-4d0c-9878-380987ec9407" (UID: "03042a97-0311-4d0c-9878-380987ec9407"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.385964 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03042a97-0311-4d0c-9878-380987ec9407-kube-api-access-5pqdt" (OuterVolumeSpecName: "kube-api-access-5pqdt") pod "03042a97-0311-4d0c-9878-380987ec9407" (UID: "03042a97-0311-4d0c-9878-380987ec9407"). InnerVolumeSpecName "kube-api-access-5pqdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.393441 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/03042a97-0311-4d0c-9878-380987ec9407-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "03042a97-0311-4d0c-9878-380987ec9407" (UID: "03042a97-0311-4d0c-9878-380987ec9407"). InnerVolumeSpecName "etc-podinfo". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.428660 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-config-data" (OuterVolumeSpecName: "config-data") pod "03042a97-0311-4d0c-9878-380987ec9407" (UID: "03042a97-0311-4d0c-9878-380987ec9407"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.431517 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-4mm74" event={"ID":"03042a97-0311-4d0c-9878-380987ec9407","Type":"ContainerDied","Data":"1403361c1b2d10be87375e74e9c05a0dd1f65671d7353b8b08125b89702787b6"} Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.431602 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1403361c1b2d10be87375e74e9c05a0dd1f65671d7353b8b08125b89702787b6" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.431778 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-4mm74" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.448609 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03042a97-0311-4d0c-9878-380987ec9407" (UID: "03042a97-0311-4d0c-9878-380987ec9407"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.480891 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/03042a97-0311-4d0c-9878-380987ec9407-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.481283 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.481676 4895 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/03042a97-0311-4d0c-9878-380987ec9407-etc-podinfo\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.481757 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pqdt\" (UniqueName: \"kubernetes.io/projected/03042a97-0311-4d0c-9878-380987ec9407-kube-api-access-5pqdt\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.481844 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.481903 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03042a97-0311-4d0c-9878-380987ec9407-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.778225 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-xqv89" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.890096 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-combined-ca-bundle\") pod \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.890128 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-db-sync-config-data\") pod \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.890243 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-etc-machine-id\") pod \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.890279 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5nz8\" (UniqueName: \"kubernetes.io/projected/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-kube-api-access-d5nz8\") pod \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.890332 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-config-data\") pod \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.890471 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-scripts\") pod \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\" (UID: \"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7\") " Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.893065 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d3fc9317-29e0-4b1c-b598-5d95fc98a1e7" (UID: "d3fc9317-29e0-4b1c-b598-5d95fc98a1e7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.898665 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-kube-api-access-d5nz8" (OuterVolumeSpecName: "kube-api-access-d5nz8") pod "d3fc9317-29e0-4b1c-b598-5d95fc98a1e7" (UID: "d3fc9317-29e0-4b1c-b598-5d95fc98a1e7"). InnerVolumeSpecName "kube-api-access-d5nz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.900180 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-scripts" (OuterVolumeSpecName: "scripts") pod "d3fc9317-29e0-4b1c-b598-5d95fc98a1e7" (UID: "d3fc9317-29e0-4b1c-b598-5d95fc98a1e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.902053 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d3fc9317-29e0-4b1c-b598-5d95fc98a1e7" (UID: "d3fc9317-29e0-4b1c-b598-5d95fc98a1e7"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.931938 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3fc9317-29e0-4b1c-b598-5d95fc98a1e7" (UID: "d3fc9317-29e0-4b1c-b598-5d95fc98a1e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.979970 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-config-data" (OuterVolumeSpecName: "config-data") pod "d3fc9317-29e0-4b1c-b598-5d95fc98a1e7" (UID: "d3fc9317-29e0-4b1c-b598-5d95fc98a1e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.995910 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.995995 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5nz8\" (UniqueName: \"kubernetes.io/projected/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-kube-api-access-d5nz8\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.996015 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.996025 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 
09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.996038 4895 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:01 crc kubenswrapper[4895]: I0129 09:02:01.996047 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.338880 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.347158 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.408744 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-dns-svc\") pod \"188f6bfb-7531-44aa-890b-6658e39aa184\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.408977 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-dns-swift-storage-0\") pod \"188f6bfb-7531-44aa-890b-6658e39aa184\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.410425 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7w5c\" (UniqueName: \"kubernetes.io/projected/188f6bfb-7531-44aa-890b-6658e39aa184-kube-api-access-k7w5c\") pod \"188f6bfb-7531-44aa-890b-6658e39aa184\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " Jan 
29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.410512 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-ovsdbserver-sb\") pod \"188f6bfb-7531-44aa-890b-6658e39aa184\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.410628 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-ovsdbserver-nb\") pod \"188f6bfb-7531-44aa-890b-6658e39aa184\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.410661 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-config\") pod \"188f6bfb-7531-44aa-890b-6658e39aa184\" (UID: \"188f6bfb-7531-44aa-890b-6658e39aa184\") " Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.448607 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/188f6bfb-7531-44aa-890b-6658e39aa184-kube-api-access-k7w5c" (OuterVolumeSpecName: "kube-api-access-k7w5c") pod "188f6bfb-7531-44aa-890b-6658e39aa184" (UID: "188f6bfb-7531-44aa-890b-6658e39aa184"). InnerVolumeSpecName "kube-api-access-k7w5c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.476208 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" event={"ID":"188f6bfb-7531-44aa-890b-6658e39aa184","Type":"ContainerDied","Data":"5fa9d43471dd37c44c39b590ba17138025c273cd3ede6549c5dd9cde8fc8653f"} Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.476296 4895 scope.go:117] "RemoveContainer" containerID="f392274ee669920d8a90e3d21da83c3dcd76a174babb73fe75efa8daed63bdee" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.476573 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zm6xl" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.499072 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "188f6bfb-7531-44aa-890b-6658e39aa184" (UID: "188f6bfb-7531-44aa-890b-6658e39aa184"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.502303 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee","Type":"ContainerStarted","Data":"81c3556aec612070be7e257554acc51d318b217a78e62173d8c2cc9acb2c418e"} Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.503313 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerName="ceilometer-central-agent" containerID="cri-o://d357bf26a0a34fec8373f6f84fb931011dc103ddf505bcd3608bef7bf78df7f1" gracePeriod=30 Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.503498 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerName="proxy-httpd" containerID="cri-o://81c3556aec612070be7e257554acc51d318b217a78e62173d8c2cc9acb2c418e" gracePeriod=30 Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.503577 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerName="sg-core" containerID="cri-o://cf4e6d2a5e74c27ec2dc66f6342b9d0c68f7ecfd4bcbf92b92ceff0a171d25cc" gracePeriod=30 Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.503561 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerName="ceilometer-notification-agent" containerID="cri-o://a960aa03999fc78683d8dddb933003aba863fb1fe3fa19a361f929143814fed7" gracePeriod=30 Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.503792 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.511798 4895 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-config" (OuterVolumeSpecName: "config") pod "188f6bfb-7531-44aa-890b-6658e39aa184" (UID: "188f6bfb-7531-44aa-890b-6658e39aa184"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.512628 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-config-data-custom\") pod \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.512675 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-logs\") pod \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.512719 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-combined-ca-bundle\") pod \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.512749 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpjgb\" (UniqueName: \"kubernetes.io/projected/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-kube-api-access-lpjgb\") pod \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.513119 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-config-data\") pod \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\" (UID: \"e93af0a8-f847-4968-8e6c-8433e4f0e4c0\") " Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.514815 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7w5c\" (UniqueName: \"kubernetes.io/projected/188f6bfb-7531-44aa-890b-6658e39aa184-kube-api-access-k7w5c\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.514871 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.514887 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.515484 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-logs" (OuterVolumeSpecName: "logs") pod "e93af0a8-f847-4968-8e6c-8433e4f0e4c0" (UID: "e93af0a8-f847-4968-8e6c-8433e4f0e4c0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.516825 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xqv89" event={"ID":"d3fc9317-29e0-4b1c-b598-5d95fc98a1e7","Type":"ContainerDied","Data":"e6b843be6d953836cf0f643f9591bd713359b27fb37ac11751bfc6711c491a4b"} Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.516937 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6b843be6d953836cf0f643f9591bd713359b27fb37ac11751bfc6711c491a4b" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.517056 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xqv89" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.517781 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "188f6bfb-7531-44aa-890b-6658e39aa184" (UID: "188f6bfb-7531-44aa-890b-6658e39aa184"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.522844 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "188f6bfb-7531-44aa-890b-6658e39aa184" (UID: "188f6bfb-7531-44aa-890b-6658e39aa184"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.531367 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-kube-api-access-lpjgb" (OuterVolumeSpecName: "kube-api-access-lpjgb") pod "e93af0a8-f847-4968-8e6c-8433e4f0e4c0" (UID: "e93af0a8-f847-4968-8e6c-8433e4f0e4c0"). InnerVolumeSpecName "kube-api-access-lpjgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.547333 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e93af0a8-f847-4968-8e6c-8433e4f0e4c0" (UID: "e93af0a8-f847-4968-8e6c-8433e4f0e4c0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.568109 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dd99c6f6d-xbc48" event={"ID":"e93af0a8-f847-4968-8e6c-8433e4f0e4c0","Type":"ContainerDied","Data":"97d26199b60ec6246460cddc1e91f7f23b2a3d78701105d9cf0567efeaede9d3"} Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.568269 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6dd99c6f6d-xbc48" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.563891 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.712069018 podStartE2EDuration="58.563853s" podCreationTimestamp="2026-01-29 09:01:04 +0000 UTC" firstStartedPulling="2026-01-29 09:01:07.041187807 +0000 UTC m=+1208.682695953" lastFinishedPulling="2026-01-29 09:02:01.892971789 +0000 UTC m=+1263.534479935" observedRunningTime="2026-01-29 09:02:02.554274425 +0000 UTC m=+1264.195782591" watchObservedRunningTime="2026-01-29 09:02:02.563853 +0000 UTC m=+1264.205361146" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.585879 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e93af0a8-f847-4968-8e6c-8433e4f0e4c0" (UID: "e93af0a8-f847-4968-8e6c-8433e4f0e4c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.613587 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "188f6bfb-7531-44aa-890b-6658e39aa184" (UID: "188f6bfb-7531-44aa-890b-6658e39aa184"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.616697 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.616739 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.616749 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.616759 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.616769 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.616780 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpjgb\" (UniqueName: \"kubernetes.io/projected/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-kube-api-access-lpjgb\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.616793 4895 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/188f6bfb-7531-44aa-890b-6658e39aa184-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 
09:02:02.634840 4895 scope.go:117] "RemoveContainer" containerID="17c9aecdf5b0da00b14a1b5b0bcce0e348f674a638b93e2765dd676f506f6f01" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.660472 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-config-data" (OuterVolumeSpecName: "config-data") pod "e93af0a8-f847-4968-8e6c-8433e4f0e4c0" (UID: "e93af0a8-f847-4968-8e6c-8433e4f0e4c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.684701 4895 scope.go:117] "RemoveContainer" containerID="3e9ac2ac65d2f1c1212467eb31a281878a58d1087abb71599cd9334b54b7254f" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.720421 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93af0a8-f847-4968-8e6c-8433e4f0e4c0-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.734198 4895 scope.go:117] "RemoveContainer" containerID="115de8d75c349a8a406fcc38b1e95144ae10cb7a4ee5af5b1534c6408250e4a1" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.887898 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-645lz"] Jan 29 09:02:02 crc kubenswrapper[4895]: E0129 09:02:02.888599 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188f6bfb-7531-44aa-890b-6658e39aa184" containerName="init" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.888629 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="188f6bfb-7531-44aa-890b-6658e39aa184" containerName="init" Jan 29 09:02:02 crc kubenswrapper[4895]: E0129 09:02:02.888648 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03042a97-0311-4d0c-9878-380987ec9407" containerName="init" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.888656 4895 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="03042a97-0311-4d0c-9878-380987ec9407" containerName="init" Jan 29 09:02:02 crc kubenswrapper[4895]: E0129 09:02:02.888668 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3fc9317-29e0-4b1c-b598-5d95fc98a1e7" containerName="cinder-db-sync" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.888680 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3fc9317-29e0-4b1c-b598-5d95fc98a1e7" containerName="cinder-db-sync" Jan 29 09:02:02 crc kubenswrapper[4895]: E0129 09:02:02.888730 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerName="barbican-api" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.888740 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerName="barbican-api" Jan 29 09:02:02 crc kubenswrapper[4895]: E0129 09:02:02.888751 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03042a97-0311-4d0c-9878-380987ec9407" containerName="ironic-db-sync" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.888759 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="03042a97-0311-4d0c-9878-380987ec9407" containerName="ironic-db-sync" Jan 29 09:02:02 crc kubenswrapper[4895]: E0129 09:02:02.888783 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerName="barbican-api-log" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.888793 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerName="barbican-api-log" Jan 29 09:02:02 crc kubenswrapper[4895]: E0129 09:02:02.888811 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188f6bfb-7531-44aa-890b-6658e39aa184" containerName="dnsmasq-dns" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.888819 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="188f6bfb-7531-44aa-890b-6658e39aa184" containerName="dnsmasq-dns" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.889137 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerName="barbican-api-log" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.889168 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="03042a97-0311-4d0c-9878-380987ec9407" containerName="ironic-db-sync" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.889190 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerName="barbican-api" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.889207 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerName="barbican-api" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.889221 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3fc9317-29e0-4b1c-b598-5d95fc98a1e7" containerName="cinder-db-sync" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.889236 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="188f6bfb-7531-44aa-890b-6658e39aa184" containerName="dnsmasq-dns" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.890253 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-645lz" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.955715 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zm6xl"] Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.968002 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zm6xl"] Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.981101 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-78c59f886f-xtrfg"] Jan 29 09:02:02 crc kubenswrapper[4895]: E0129 09:02:02.981864 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerName="barbican-api" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.981890 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" containerName="barbican-api" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.983170 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.996132 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Jan 29 09:02:02 crc kubenswrapper[4895]: I0129 09:02:02.996414 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-dockercfg-zk8bm" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.013001 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-645lz"] Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.028901 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bttt7\" (UniqueName: \"kubernetes.io/projected/0854dd31-b444-4b7a-b397-7d623977d1f5-kube-api-access-bttt7\") pod \"ironic-inspector-db-create-645lz\" (UID: \"0854dd31-b444-4b7a-b397-7d623977d1f5\") " pod="openstack/ironic-inspector-db-create-645lz" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.043642 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0854dd31-b444-4b7a-b397-7d623977d1f5-operator-scripts\") pod \"ironic-inspector-db-create-645lz\" (UID: \"0854dd31-b444-4b7a-b397-7d623977d1f5\") " pod="openstack/ironic-inspector-db-create-645lz" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.044034 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-78c59f886f-xtrfg"] Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.080360 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6dd99c6f6d-xbc48"] Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.112032 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6dd99c6f6d-xbc48"] Jan 29 09:02:03 crc kubenswrapper[4895]: 
I0129 09:02:03.245828 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bttt7\" (UniqueName: \"kubernetes.io/projected/0854dd31-b444-4b7a-b397-7d623977d1f5-kube-api-access-bttt7\") pod \"ironic-inspector-db-create-645lz\" (UID: \"0854dd31-b444-4b7a-b397-7d623977d1f5\") " pod="openstack/ironic-inspector-db-create-645lz" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.245955 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl8wb\" (UniqueName: \"kubernetes.io/projected/844ab9b8-4b72-401d-b008-db11605452a8-kube-api-access-dl8wb\") pod \"ironic-neutron-agent-78c59f886f-xtrfg\" (UID: \"844ab9b8-4b72-401d-b008-db11605452a8\") " pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.246007 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/844ab9b8-4b72-401d-b008-db11605452a8-config\") pod \"ironic-neutron-agent-78c59f886f-xtrfg\" (UID: \"844ab9b8-4b72-401d-b008-db11605452a8\") " pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.246127 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844ab9b8-4b72-401d-b008-db11605452a8-combined-ca-bundle\") pod \"ironic-neutron-agent-78c59f886f-xtrfg\" (UID: \"844ab9b8-4b72-401d-b008-db11605452a8\") " pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.246182 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0854dd31-b444-4b7a-b397-7d623977d1f5-operator-scripts\") pod \"ironic-inspector-db-create-645lz\" (UID: \"0854dd31-b444-4b7a-b397-7d623977d1f5\") " 
pod="openstack/ironic-inspector-db-create-645lz" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.259385 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0854dd31-b444-4b7a-b397-7d623977d1f5-operator-scripts\") pod \"ironic-inspector-db-create-645lz\" (UID: \"0854dd31-b444-4b7a-b397-7d623977d1f5\") " pod="openstack/ironic-inspector-db-create-645lz" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.338531 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bttt7\" (UniqueName: \"kubernetes.io/projected/0854dd31-b444-4b7a-b397-7d623977d1f5-kube-api-access-bttt7\") pod \"ironic-inspector-db-create-645lz\" (UID: \"0854dd31-b444-4b7a-b397-7d623977d1f5\") " pod="openstack/ironic-inspector-db-create-645lz" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.391334 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl8wb\" (UniqueName: \"kubernetes.io/projected/844ab9b8-4b72-401d-b008-db11605452a8-kube-api-access-dl8wb\") pod \"ironic-neutron-agent-78c59f886f-xtrfg\" (UID: \"844ab9b8-4b72-401d-b008-db11605452a8\") " pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.392954 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/844ab9b8-4b72-401d-b008-db11605452a8-config\") pod \"ironic-neutron-agent-78c59f886f-xtrfg\" (UID: \"844ab9b8-4b72-401d-b008-db11605452a8\") " pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.393254 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844ab9b8-4b72-401d-b008-db11605452a8-combined-ca-bundle\") pod \"ironic-neutron-agent-78c59f886f-xtrfg\" (UID: 
\"844ab9b8-4b72-401d-b008-db11605452a8\") " pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.399268 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="188f6bfb-7531-44aa-890b-6658e39aa184" path="/var/lib/kubelet/pods/188f6bfb-7531-44aa-890b-6658e39aa184/volumes" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.421698 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/844ab9b8-4b72-401d-b008-db11605452a8-config\") pod \"ironic-neutron-agent-78c59f886f-xtrfg\" (UID: \"844ab9b8-4b72-401d-b008-db11605452a8\") " pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.422522 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844ab9b8-4b72-401d-b008-db11605452a8-combined-ca-bundle\") pod \"ironic-neutron-agent-78c59f886f-xtrfg\" (UID: \"844ab9b8-4b72-401d-b008-db11605452a8\") " pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.423197 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e93af0a8-f847-4968-8e6c-8433e4f0e4c0" path="/var/lib/kubelet/pods/e93af0a8-f847-4968-8e6c-8433e4f0e4c0/volumes" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.424281 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-3fd2-account-create-update-86dsg"] Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.428309 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl8wb\" (UniqueName: \"kubernetes.io/projected/844ab9b8-4b72-401d-b008-db11605452a8-kube-api-access-dl8wb\") pod \"ironic-neutron-agent-78c59f886f-xtrfg\" (UID: \"844ab9b8-4b72-401d-b008-db11605452a8\") " pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:03 crc 
kubenswrapper[4895]: I0129 09:02:03.434744 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-b8978dc4d-mk89b"] Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.435029 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-3fd2-account-create-update-86dsg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.438503 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.440889 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.472065 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.473201 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.473549 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.474452 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-3fd2-account-create-update-86dsg"] Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.478395 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.488644 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.492351 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.503938 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-b8978dc4d-mk89b"] Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.510255 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.510355 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6sgwh" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.510650 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.510870 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.516152 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-645lz" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.524325 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.546858 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-d7mhv"] Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.548815 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.559003 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-d7mhv"] Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.602278 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4j5l\" (UniqueName: \"kubernetes.io/projected/d97428ad-71cd-4135-8e6e-157d27ddb70f-kube-api-access-c4j5l\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.602344 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-etc-podinfo\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.602375 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-combined-ca-bundle\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.602408 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.602433 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-t54vs\" (UniqueName: \"kubernetes.io/projected/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-kube-api-access-t54vs\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.602507 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d97428ad-71cd-4135-8e6e-157d27ddb70f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.602533 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.602572 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.602595 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-scripts\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.602614 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.602636 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-logs\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.602657 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e059385f-a716-4bd6-96ff-825e1fac5216-operator-scripts\") pod \"ironic-inspector-3fd2-account-create-update-86dsg\" (UID: \"e059385f-a716-4bd6-96ff-825e1fac5216\") " pod="openstack/ironic-inspector-3fd2-account-create-update-86dsg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.602686 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data-merged\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.602710 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.602733 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fksh5\" 
(UniqueName: \"kubernetes.io/projected/e059385f-a716-4bd6-96ff-825e1fac5216-kube-api-access-fksh5\") pod \"ironic-inspector-3fd2-account-create-update-86dsg\" (UID: \"e059385f-a716-4bd6-96ff-825e1fac5216\") " pod="openstack/ironic-inspector-3fd2-account-create-update-86dsg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.602758 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data-custom\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.605626 4895 generic.go:334] "Generic (PLEG): container finished" podID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerID="81c3556aec612070be7e257554acc51d318b217a78e62173d8c2cc9acb2c418e" exitCode=0 Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.605666 4895 generic.go:334] "Generic (PLEG): container finished" podID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerID="cf4e6d2a5e74c27ec2dc66f6342b9d0c68f7ecfd4bcbf92b92ceff0a171d25cc" exitCode=2 Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.605676 4895 generic.go:334] "Generic (PLEG): container finished" podID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerID="d357bf26a0a34fec8373f6f84fb931011dc103ddf505bcd3608bef7bf78df7f1" exitCode=0 Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.605785 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee","Type":"ContainerDied","Data":"81c3556aec612070be7e257554acc51d318b217a78e62173d8c2cc9acb2c418e"} Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.605826 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee","Type":"ContainerDied","Data":"cf4e6d2a5e74c27ec2dc66f6342b9d0c68f7ecfd4bcbf92b92ceff0a171d25cc"} Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.605838 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee","Type":"ContainerDied","Data":"d357bf26a0a34fec8373f6f84fb931011dc103ddf505bcd3608bef7bf78df7f1"} Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.609388 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.668684 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.671054 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.678837 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.706004 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.706079 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t54vs\" (UniqueName: \"kubernetes.io/projected/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-kube-api-access-t54vs\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.706750 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-dns-svc\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.706820 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d97428ad-71cd-4135-8e6e-157d27ddb70f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.706866 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.706893 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.706979 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.707026 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-scripts\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.707057 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.707096 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-logs\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.707124 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e059385f-a716-4bd6-96ff-825e1fac5216-operator-scripts\") pod \"ironic-inspector-3fd2-account-create-update-86dsg\" (UID: \"e059385f-a716-4bd6-96ff-825e1fac5216\") " pod="openstack/ironic-inspector-3fd2-account-create-update-86dsg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.707151 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8stz\" (UniqueName: \"kubernetes.io/projected/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-kube-api-access-b8stz\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.707203 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data-merged\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.707241 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.707276 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fksh5\" (UniqueName: \"kubernetes.io/projected/e059385f-a716-4bd6-96ff-825e1fac5216-kube-api-access-fksh5\") pod \"ironic-inspector-3fd2-account-create-update-86dsg\" (UID: \"e059385f-a716-4bd6-96ff-825e1fac5216\") " pod="openstack/ironic-inspector-3fd2-account-create-update-86dsg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.707307 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data-custom\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.707379 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4j5l\" (UniqueName: \"kubernetes.io/projected/d97428ad-71cd-4135-8e6e-157d27ddb70f-kube-api-access-c4j5l\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.707415 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.707442 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-config\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.707493 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-etc-podinfo\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.707531 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-combined-ca-bundle\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.707560 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.710271 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data-merged\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.712301 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d97428ad-71cd-4135-8e6e-157d27ddb70f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.717054 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-logs\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.718237 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e059385f-a716-4bd6-96ff-825e1fac5216-operator-scripts\") pod \"ironic-inspector-3fd2-account-create-update-86dsg\" (UID: \"e059385f-a716-4bd6-96ff-825e1fac5216\") " pod="openstack/ironic-inspector-3fd2-account-create-update-86dsg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.718935 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.721651 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.725293 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.734200 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-combined-ca-bundle\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.737512 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-etc-podinfo\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.739448 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.739624 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.748543 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-scripts\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") 
" pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.753204 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t54vs\" (UniqueName: \"kubernetes.io/projected/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-kube-api-access-t54vs\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.768290 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4j5l\" (UniqueName: \"kubernetes.io/projected/d97428ad-71cd-4135-8e6e-157d27ddb70f-kube-api-access-c4j5l\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.781791 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fksh5\" (UniqueName: \"kubernetes.io/projected/e059385f-a716-4bd6-96ff-825e1fac5216-kube-api-access-fksh5\") pod \"ironic-inspector-3fd2-account-create-update-86dsg\" (UID: \"e059385f-a716-4bd6-96ff-825e1fac5216\") " pod="openstack/ironic-inspector-3fd2-account-create-update-86dsg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.782108 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.795625 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data-custom\") pod \"ironic-b8978dc4d-mk89b\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: 
I0129 09:02:03.812499 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-config-data-custom\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.813392 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-scripts\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.815459 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-dns-svc\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.815504 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15832a7e-aad3-40f5-8515-f53f55ecfaca-logs\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.815566 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.815633 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.815746 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8stz\" (UniqueName: \"kubernetes.io/projected/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-kube-api-access-b8stz\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.815819 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/15832a7e-aad3-40f5-8515-f53f55ecfaca-etc-machine-id\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.815861 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swqtf\" (UniqueName: \"kubernetes.io/projected/15832a7e-aad3-40f5-8515-f53f55ecfaca-kube-api-access-swqtf\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.816511 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-config\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.817770 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-dns-svc\") pod 
\"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.819529 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.820767 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.820792 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-config\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.821084 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-config-data\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.821198 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " 
pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.821471 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.827593 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.846127 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8stz\" (UniqueName: \"kubernetes.io/projected/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-kube-api-access-b8stz\") pod \"dnsmasq-dns-6578955fd5-d7mhv\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.891568 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.895868 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.902011 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.902712 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.918289 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-3fd2-account-create-update-86dsg" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.924330 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.925849 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/15832a7e-aad3-40f5-8515-f53f55ecfaca-etc-machine-id\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.925903 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swqtf\" (UniqueName: \"kubernetes.io/projected/15832a7e-aad3-40f5-8515-f53f55ecfaca-kube-api-access-swqtf\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.925988 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-config-data\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.926056 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-config-data-custom\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.926083 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-scripts\") pod \"cinder-api-0\" (UID: 
\"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.926130 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15832a7e-aad3-40f5-8515-f53f55ecfaca-logs\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.926173 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.928667 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/15832a7e-aad3-40f5-8515-f53f55ecfaca-etc-machine-id\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.932884 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.934719 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15832a7e-aad3-40f5-8515-f53f55ecfaca-logs\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.935006 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-config-data\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.940449 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-scripts\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.940615 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-config-data-custom\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.942196 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.944086 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.958962 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swqtf\" (UniqueName: \"kubernetes.io/projected/15832a7e-aad3-40f5-8515-f53f55ecfaca-kube-api-access-swqtf\") pod \"cinder-api-0\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") " pod="openstack/cinder-api-0" Jan 29 09:02:03 crc kubenswrapper[4895]: I0129 09:02:03.973014 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.006096 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.028372 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f893b3e3-3833-4a94-ab55-951f600fdadd-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.028573 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.028618 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f893b3e3-3833-4a94-ab55-951f600fdadd-scripts\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.028756 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f893b3e3-3833-4a94-ab55-951f600fdadd-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.029615 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f893b3e3-3833-4a94-ab55-951f600fdadd-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc 
kubenswrapper[4895]: I0129 09:02:04.029641 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f893b3e3-3833-4a94-ab55-951f600fdadd-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.030030 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f893b3e3-3833-4a94-ab55-951f600fdadd-config-data\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.030062 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r94v\" (UniqueName: \"kubernetes.io/projected/f893b3e3-3833-4a94-ab55-951f600fdadd-kube-api-access-6r94v\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.132660 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f893b3e3-3833-4a94-ab55-951f600fdadd-config-data\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.133166 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r94v\" (UniqueName: \"kubernetes.io/projected/f893b3e3-3833-4a94-ab55-951f600fdadd-kube-api-access-6r94v\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.133358 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f893b3e3-3833-4a94-ab55-951f600fdadd-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.133443 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.133474 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f893b3e3-3833-4a94-ab55-951f600fdadd-scripts\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.133497 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f893b3e3-3833-4a94-ab55-951f600fdadd-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.133599 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f893b3e3-3833-4a94-ab55-951f600fdadd-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.133623 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f893b3e3-3833-4a94-ab55-951f600fdadd-config-data-custom\") pod 
\"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.135079 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.140572 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f893b3e3-3833-4a94-ab55-951f600fdadd-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.142141 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f893b3e3-3833-4a94-ab55-951f600fdadd-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.146420 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f893b3e3-3833-4a94-ab55-951f600fdadd-config-data\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.157639 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f893b3e3-3833-4a94-ab55-951f600fdadd-scripts\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.161147 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r94v\" (UniqueName: \"kubernetes.io/projected/f893b3e3-3833-4a94-ab55-951f600fdadd-kube-api-access-6r94v\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.167685 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f893b3e3-3833-4a94-ab55-951f600fdadd-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.168280 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f893b3e3-3833-4a94-ab55-951f600fdadd-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.191991 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ironic-conductor-0\" (UID: \"f893b3e3-3833-4a94-ab55-951f600fdadd\") " pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.229203 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-conductor-0" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.282451 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-645lz"] Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.379474 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-78c59f886f-xtrfg"] Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.636685 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" event={"ID":"844ab9b8-4b72-401d-b008-db11605452a8","Type":"ContainerStarted","Data":"ca9504dae57f5c025f8475a3e0d1c2769191841778f82f64c47f16a6bd7fba51"} Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.640608 4895 generic.go:334] "Generic (PLEG): container finished" podID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerID="a960aa03999fc78683d8dddb933003aba863fb1fe3fa19a361f929143814fed7" exitCode=0 Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.640661 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee","Type":"ContainerDied","Data":"a960aa03999fc78683d8dddb933003aba863fb1fe3fa19a361f929143814fed7"} Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.641946 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-645lz" event={"ID":"0854dd31-b444-4b7a-b397-7d623977d1f5","Type":"ContainerStarted","Data":"b871fca8ed39bd918e0b0fded7790e5a0fbfff5203fe4afdf67743280fb73709"} Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.765032 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-b8978dc4d-mk89b"] Jan 29 09:02:04 crc kubenswrapper[4895]: W0129 09:02:04.778754 4895 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ce22da7_d86c_4a45_97ca_f67baee5d1fc.slice/crio-52eefe5758f610d1ed7c9b1a2160c70f893bd30939dbfa7616a6ff16bd9cbf30 WatchSource:0}: Error finding container 52eefe5758f610d1ed7c9b1a2160c70f893bd30939dbfa7616a6ff16bd9cbf30: Status 404 returned error can't find the container with id 52eefe5758f610d1ed7c9b1a2160c70f893bd30939dbfa7616a6ff16bd9cbf30 Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.881483 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:02:04 crc kubenswrapper[4895]: W0129 09:02:04.959402 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode059385f_a716_4bd6_96ff_825e1fac5216.slice/crio-d9987e9e31bd89c6400e901467013a9dcab20b90f956811ff031d72e4d441e3c WatchSource:0}: Error finding container d9987e9e31bd89c6400e901467013a9dcab20b90f956811ff031d72e4d441e3c: Status 404 returned error can't find the container with id d9987e9e31bd89c6400e901467013a9dcab20b90f956811ff031d72e4d441e3c Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.960600 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-3fd2-account-create-update-86dsg"] Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.976006 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xsl4\" (UniqueName: \"kubernetes.io/projected/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-kube-api-access-4xsl4\") pod \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.976165 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-config-data\") pod \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\" (UID: 
\"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.976247 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-scripts\") pod \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.976371 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-run-httpd\") pod \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.976406 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-combined-ca-bundle\") pod \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.976464 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-log-httpd\") pod \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.976695 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-sg-core-conf-yaml\") pod \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\" (UID: \"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee\") " Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.978282 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-run-httpd" 
(OuterVolumeSpecName: "run-httpd") pod "ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" (UID: "ae0b4183-6fb3-4f5a-86e6-ffe4330616ee"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.978749 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" (UID: "ae0b4183-6fb3-4f5a-86e6-ffe4330616ee"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.986269 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-kube-api-access-4xsl4" (OuterVolumeSpecName: "kube-api-access-4xsl4") pod "ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" (UID: "ae0b4183-6fb3-4f5a-86e6-ffe4330616ee"). InnerVolumeSpecName "kube-api-access-4xsl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:04 crc kubenswrapper[4895]: I0129 09:02:04.989300 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-scripts" (OuterVolumeSpecName: "scripts") pod "ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" (UID: "ae0b4183-6fb3-4f5a-86e6-ffe4330616ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.020140 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" (UID: "ae0b4183-6fb3-4f5a-86e6-ffe4330616ee"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.082487 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.082523 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.082532 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.082541 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.082554 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xsl4\" (UniqueName: \"kubernetes.io/projected/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-kube-api-access-4xsl4\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:05 crc kubenswrapper[4895]: W0129 09:02:05.119185 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd97428ad_71cd_4135_8e6e_157d27ddb70f.slice/crio-2e33cfa793e724efd672bdd58d29ce2a8f693d3e17ce88e59839bad8659037eb WatchSource:0}: Error finding container 2e33cfa793e724efd672bdd58d29ce2a8f693d3e17ce88e59839bad8659037eb: Status 404 returned error can't find the container with id 2e33cfa793e724efd672bdd58d29ce2a8f693d3e17ce88e59839bad8659037eb Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.119383 4895 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:02:05 crc kubenswrapper[4895]: W0129 09:02:05.125256 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac55a7b1_fcd1_4d76_964b_7b0f1c2b7e57.slice/crio-43d33b0340083bcac9a375ff1343dd4542366d54c75e9440bb50db08f55967f3 WatchSource:0}: Error finding container 43d33b0340083bcac9a375ff1343dd4542366d54c75e9440bb50db08f55967f3: Status 404 returned error can't find the container with id 43d33b0340083bcac9a375ff1343dd4542366d54c75e9440bb50db08f55967f3 Jan 29 09:02:05 crc kubenswrapper[4895]: W0129 09:02:05.156766 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15832a7e_aad3_40f5_8515_f53f55ecfaca.slice/crio-e5d160cab68f078b6dd18d162d30e2b097855dc5a55ba8f831c43ca70f42abc9 WatchSource:0}: Error finding container e5d160cab68f078b6dd18d162d30e2b097855dc5a55ba8f831c43ca70f42abc9: Status 404 returned error can't find the container with id e5d160cab68f078b6dd18d162d30e2b097855dc5a55ba8f831c43ca70f42abc9 Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.156947 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-config-data" (OuterVolumeSpecName: "config-data") pod "ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" (UID: "ae0b4183-6fb3-4f5a-86e6-ffe4330616ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:05 crc kubenswrapper[4895]: W0129 09:02:05.157413 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf893b3e3_3833_4a94_ab55_951f600fdadd.slice/crio-f1e815332a27b9621a68f44cac6346b169c00ed099f0003f396b4b1d1cdc257a WatchSource:0}: Error finding container f1e815332a27b9621a68f44cac6346b169c00ed099f0003f396b4b1d1cdc257a: Status 404 returned error can't find the container with id f1e815332a27b9621a68f44cac6346b169c00ed099f0003f396b4b1d1cdc257a Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.167512 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" (UID: "ae0b4183-6fb3-4f5a-86e6-ffe4330616ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.179539 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-d7mhv"] Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.184749 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.184799 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.282649 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.282697 4895 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/ironic-conductor-0"] Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.407209 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.698288 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d97428ad-71cd-4135-8e6e-157d27ddb70f","Type":"ContainerStarted","Data":"2e33cfa793e724efd672bdd58d29ce2a8f693d3e17ce88e59839bad8659037eb"} Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.711640 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"15832a7e-aad3-40f5-8515-f53f55ecfaca","Type":"ContainerStarted","Data":"e5d160cab68f078b6dd18d162d30e2b097855dc5a55ba8f831c43ca70f42abc9"} Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.714182 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b8978dc4d-mk89b" event={"ID":"7ce22da7-d86c-4a45-97ca-f67baee5d1fc","Type":"ContainerStarted","Data":"52eefe5758f610d1ed7c9b1a2160c70f893bd30939dbfa7616a6ff16bd9cbf30"} Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.721310 4895 generic.go:334] "Generic (PLEG): container finished" podID="0854dd31-b444-4b7a-b397-7d623977d1f5" containerID="0e431e4940db82a907ad982e40e05fe07f5c7ee2915ce96cc7161c1edfa54abe" exitCode=0 Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.721485 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-645lz" event={"ID":"0854dd31-b444-4b7a-b397-7d623977d1f5","Type":"ContainerDied","Data":"0e431e4940db82a907ad982e40e05fe07f5c7ee2915ce96cc7161c1edfa54abe"} Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.753372 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae0b4183-6fb3-4f5a-86e6-ffe4330616ee","Type":"ContainerDied","Data":"4ec978bb0a1cd35ff59acbefb9e06054f86e15eb21ff8933d8bd3b2d4dc1acd2"} Jan 29 09:02:05 crc 
kubenswrapper[4895]: I0129 09:02:05.753438 4895 scope.go:117] "RemoveContainer" containerID="81c3556aec612070be7e257554acc51d318b217a78e62173d8c2cc9acb2c418e" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.753622 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.759811 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f893b3e3-3833-4a94-ab55-951f600fdadd","Type":"ContainerStarted","Data":"f1e815332a27b9621a68f44cac6346b169c00ed099f0003f396b4b1d1cdc257a"} Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.801662 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.823368 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" event={"ID":"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57","Type":"ContainerStarted","Data":"6f6d94f9b72f1f6de06eca4cc7f25490088c9eb005917235e4c5387465c67cfb"} Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.823430 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" event={"ID":"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57","Type":"ContainerStarted","Data":"43d33b0340083bcac9a375ff1343dd4542366d54c75e9440bb50db08f55967f3"} Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.831029 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.840553 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:02:05 crc kubenswrapper[4895]: E0129 09:02:05.842129 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerName="proxy-httpd" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.842154 4895 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerName="proxy-httpd" Jan 29 09:02:05 crc kubenswrapper[4895]: E0129 09:02:05.842171 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerName="ceilometer-notification-agent" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.842178 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerName="ceilometer-notification-agent" Jan 29 09:02:05 crc kubenswrapper[4895]: E0129 09:02:05.842187 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerName="sg-core" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.842194 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerName="sg-core" Jan 29 09:02:05 crc kubenswrapper[4895]: E0129 09:02:05.842208 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerName="ceilometer-central-agent" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.842215 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerName="ceilometer-central-agent" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.842471 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerName="proxy-httpd" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.842489 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerName="ceilometer-central-agent" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.842501 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerName="sg-core" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.842527 4895 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" containerName="ceilometer-notification-agent" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.845647 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.850094 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.850777 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.852828 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.930907 4895 generic.go:334] "Generic (PLEG): container finished" podID="e059385f-a716-4bd6-96ff-825e1fac5216" containerID="b360403ff3246432ef6cc95d44263e39be8969781516702aea433430bd3069a5" exitCode=0 Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.930997 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-3fd2-account-create-update-86dsg" event={"ID":"e059385f-a716-4bd6-96ff-825e1fac5216","Type":"ContainerDied","Data":"b360403ff3246432ef6cc95d44263e39be8969781516702aea433430bd3069a5"} Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.931034 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-3fd2-account-create-update-86dsg" event={"ID":"e059385f-a716-4bd6-96ff-825e1fac5216","Type":"ContainerStarted","Data":"d9987e9e31bd89c6400e901467013a9dcab20b90f956811ff031d72e4d441e3c"} Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.933648 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/310c3619-a1a7-4137-8e90-4646ec724cb8-run-httpd\") pod 
\"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.933804 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-config-data\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.933876 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.933968 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-scripts\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.934103 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/310c3619-a1a7-4137-8e90-4646ec724cb8-log-httpd\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 09:02:05.934400 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:05 crc kubenswrapper[4895]: I0129 
09:02:05.934501 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2tj\" (UniqueName: \"kubernetes.io/projected/310c3619-a1a7-4137-8e90-4646ec724cb8-kube-api-access-gq2tj\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.029568 4895 scope.go:117] "RemoveContainer" containerID="cf4e6d2a5e74c27ec2dc66f6342b9d0c68f7ecfd4bcbf92b92ceff0a171d25cc" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.037402 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq2tj\" (UniqueName: \"kubernetes.io/projected/310c3619-a1a7-4137-8e90-4646ec724cb8-kube-api-access-gq2tj\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.038195 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/310c3619-a1a7-4137-8e90-4646ec724cb8-run-httpd\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.038333 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-config-data\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.038423 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 
09:02:06.038519 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-scripts\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.038668 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/310c3619-a1a7-4137-8e90-4646ec724cb8-log-httpd\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.038802 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.038850 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/310c3619-a1a7-4137-8e90-4646ec724cb8-run-httpd\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.040187 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/310c3619-a1a7-4137-8e90-4646ec724cb8-log-httpd\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.045116 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.045395 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-scripts\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.047294 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-config-data\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.050642 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.057281 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq2tj\" (UniqueName: \"kubernetes.io/projected/310c3619-a1a7-4137-8e90-4646ec724cb8-kube-api-access-gq2tj\") pod \"ceilometer-0\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") " pod="openstack/ceilometer-0" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.103774 4895 scope.go:117] "RemoveContainer" containerID="a960aa03999fc78683d8dddb933003aba863fb1fe3fa19a361f929143814fed7" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.170239 4895 scope.go:117] "RemoveContainer" containerID="d357bf26a0a34fec8373f6f84fb931011dc103ddf505bcd3608bef7bf78df7f1" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.206806 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:02:06 crc kubenswrapper[4895]: I0129 09:02:06.926198 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.023866 4895 generic.go:334] "Generic (PLEG): container finished" podID="ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57" containerID="6f6d94f9b72f1f6de06eca4cc7f25490088c9eb005917235e4c5387465c67cfb" exitCode=0 Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.024001 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" event={"ID":"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57","Type":"ContainerDied","Data":"6f6d94f9b72f1f6de06eca4cc7f25490088c9eb005917235e4c5387465c67cfb"} Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.024115 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.024132 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" event={"ID":"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57","Type":"ContainerStarted","Data":"1054f5738a4e556614c0471c9472a322dc5ecd1d3b74e7ea0fe5f879934a9adf"} Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.049032 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"15832a7e-aad3-40f5-8515-f53f55ecfaca","Type":"ContainerStarted","Data":"b694169882a3bd3f1a2409a2f46ec4e1dc46aa93430b06e0df3ceda90d16456f"} Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.057772 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" podStartSLOduration=4.057744708 podStartE2EDuration="4.057744708s" podCreationTimestamp="2026-01-29 09:02:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 
09:02:07.051024297 +0000 UTC m=+1268.692532453" watchObservedRunningTime="2026-01-29 09:02:07.057744708 +0000 UTC m=+1268.699252844" Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.088620 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f893b3e3-3833-4a94-ab55-951f600fdadd","Type":"ContainerStarted","Data":"58de8a323624b4878b6f5a4f701f28d39ab636dc5868218395af6d12f91f6879"} Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.244827 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae0b4183-6fb3-4f5a-86e6-ffe4330616ee" path="/var/lib/kubelet/pods/ae0b4183-6fb3-4f5a-86e6-ffe4330616ee/volumes" Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.762857 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-645lz" Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.769190 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-3fd2-account-create-update-86dsg" Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.888954 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0854dd31-b444-4b7a-b397-7d623977d1f5-operator-scripts\") pod \"0854dd31-b444-4b7a-b397-7d623977d1f5\" (UID: \"0854dd31-b444-4b7a-b397-7d623977d1f5\") " Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.889226 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e059385f-a716-4bd6-96ff-825e1fac5216-operator-scripts\") pod \"e059385f-a716-4bd6-96ff-825e1fac5216\" (UID: \"e059385f-a716-4bd6-96ff-825e1fac5216\") " Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.889265 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bttt7\" (UniqueName: \"kubernetes.io/projected/0854dd31-b444-4b7a-b397-7d623977d1f5-kube-api-access-bttt7\") pod \"0854dd31-b444-4b7a-b397-7d623977d1f5\" (UID: \"0854dd31-b444-4b7a-b397-7d623977d1f5\") " Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.889338 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fksh5\" (UniqueName: \"kubernetes.io/projected/e059385f-a716-4bd6-96ff-825e1fac5216-kube-api-access-fksh5\") pod \"e059385f-a716-4bd6-96ff-825e1fac5216\" (UID: \"e059385f-a716-4bd6-96ff-825e1fac5216\") " Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.890045 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0854dd31-b444-4b7a-b397-7d623977d1f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0854dd31-b444-4b7a-b397-7d623977d1f5" (UID: "0854dd31-b444-4b7a-b397-7d623977d1f5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.891996 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e059385f-a716-4bd6-96ff-825e1fac5216-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e059385f-a716-4bd6-96ff-825e1fac5216" (UID: "e059385f-a716-4bd6-96ff-825e1fac5216"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.898194 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0854dd31-b444-4b7a-b397-7d623977d1f5-kube-api-access-bttt7" (OuterVolumeSpecName: "kube-api-access-bttt7") pod "0854dd31-b444-4b7a-b397-7d623977d1f5" (UID: "0854dd31-b444-4b7a-b397-7d623977d1f5"). InnerVolumeSpecName "kube-api-access-bttt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.906772 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e059385f-a716-4bd6-96ff-825e1fac5216-kube-api-access-fksh5" (OuterVolumeSpecName: "kube-api-access-fksh5") pod "e059385f-a716-4bd6-96ff-825e1fac5216" (UID: "e059385f-a716-4bd6-96ff-825e1fac5216"). InnerVolumeSpecName "kube-api-access-fksh5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.993424 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e059385f-a716-4bd6-96ff-825e1fac5216-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.993500 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bttt7\" (UniqueName: \"kubernetes.io/projected/0854dd31-b444-4b7a-b397-7d623977d1f5-kube-api-access-bttt7\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.993520 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fksh5\" (UniqueName: \"kubernetes.io/projected/e059385f-a716-4bd6-96ff-825e1fac5216-kube-api-access-fksh5\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:07 crc kubenswrapper[4895]: I0129 09:02:07.993534 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0854dd31-b444-4b7a-b397-7d623977d1f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:08 crc kubenswrapper[4895]: I0129 09:02:08.111734 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d97428ad-71cd-4135-8e6e-157d27ddb70f","Type":"ContainerStarted","Data":"1c78dd5771a3d8ca629ce90e502c041507fabb2cf1172b3ebf2f9e962a879c94"} Jan 29 09:02:08 crc kubenswrapper[4895]: I0129 09:02:08.113864 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-645lz" event={"ID":"0854dd31-b444-4b7a-b397-7d623977d1f5","Type":"ContainerDied","Data":"b871fca8ed39bd918e0b0fded7790e5a0fbfff5203fe4afdf67743280fb73709"} Jan 29 09:02:08 crc kubenswrapper[4895]: I0129 09:02:08.113895 4895 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b871fca8ed39bd918e0b0fded7790e5a0fbfff5203fe4afdf67743280fb73709" Jan 29 09:02:08 crc kubenswrapper[4895]: I0129 09:02:08.113963 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-645lz" Jan 29 09:02:08 crc kubenswrapper[4895]: I0129 09:02:08.123381 4895 generic.go:334] "Generic (PLEG): container finished" podID="f893b3e3-3833-4a94-ab55-951f600fdadd" containerID="58de8a323624b4878b6f5a4f701f28d39ab636dc5868218395af6d12f91f6879" exitCode=0 Jan 29 09:02:08 crc kubenswrapper[4895]: I0129 09:02:08.123442 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f893b3e3-3833-4a94-ab55-951f600fdadd","Type":"ContainerDied","Data":"58de8a323624b4878b6f5a4f701f28d39ab636dc5868218395af6d12f91f6879"} Jan 29 09:02:08 crc kubenswrapper[4895]: I0129 09:02:08.129584 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"310c3619-a1a7-4137-8e90-4646ec724cb8","Type":"ContainerStarted","Data":"48ac0713e02890edee64f015ebeb24ef9446bef28cd41fc0fe3ea5e6c7967826"} Jan 29 09:02:08 crc kubenswrapper[4895]: I0129 09:02:08.140300 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-3fd2-account-create-update-86dsg" event={"ID":"e059385f-a716-4bd6-96ff-825e1fac5216","Type":"ContainerDied","Data":"d9987e9e31bd89c6400e901467013a9dcab20b90f956811ff031d72e4d441e3c"} Jan 29 09:02:08 crc kubenswrapper[4895]: I0129 09:02:08.140342 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-3fd2-account-create-update-86dsg" Jan 29 09:02:08 crc kubenswrapper[4895]: I0129 09:02:08.140407 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9987e9e31bd89c6400e901467013a9dcab20b90f956811ff031d72e4d441e3c" Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.181695 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" event={"ID":"844ab9b8-4b72-401d-b008-db11605452a8","Type":"ContainerStarted","Data":"77fa1e99c07ec72511bcebcb5fa3fab46d5b1010872dfe3f6de803d0a9842f34"} Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.186078 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.195340 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b8978dc4d-mk89b" event={"ID":"7ce22da7-d86c-4a45-97ca-f67baee5d1fc","Type":"ContainerStarted","Data":"f3f9f86cafd20a3738dd88ae972fcf8176e80eec27ddb8ee04ebd230b3b54880"} Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.267219 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" podStartSLOduration=3.1382365 podStartE2EDuration="7.267185226s" podCreationTimestamp="2026-01-29 09:02:02 +0000 UTC" firstStartedPulling="2026-01-29 09:02:04.453002225 +0000 UTC m=+1266.094510371" lastFinishedPulling="2026-01-29 09:02:08.581950951 +0000 UTC m=+1270.223459097" observedRunningTime="2026-01-29 09:02:09.216629191 +0000 UTC m=+1270.858137337" watchObservedRunningTime="2026-01-29 09:02:09.267185226 +0000 UTC m=+1270.908693372" Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.529671 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-7f7db74854-hkzkt"] Jan 29 09:02:09 crc kubenswrapper[4895]: E0129 09:02:09.530434 4895 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0854dd31-b444-4b7a-b397-7d623977d1f5" containerName="mariadb-database-create"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.530463 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0854dd31-b444-4b7a-b397-7d623977d1f5" containerName="mariadb-database-create"
Jan 29 09:02:09 crc kubenswrapper[4895]: E0129 09:02:09.530522 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e059385f-a716-4bd6-96ff-825e1fac5216" containerName="mariadb-account-create-update"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.530549 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e059385f-a716-4bd6-96ff-825e1fac5216" containerName="mariadb-account-create-update"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.530797 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e059385f-a716-4bd6-96ff-825e1fac5216" containerName="mariadb-account-create-update"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.530815 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0854dd31-b444-4b7a-b397-7d623977d1f5" containerName="mariadb-database-create"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.533059 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.543846 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.544187 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.548314 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-7f7db74854-hkzkt"]
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.692986 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-public-tls-certs\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.693578 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-internal-tls-certs\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.693680 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-combined-ca-bundle\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.693748 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-scripts\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.693839 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-config-data-merged\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.693860 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-config-data-custom\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.693972 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b58lx\" (UniqueName: \"kubernetes.io/projected/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-kube-api-access-b58lx\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.694002 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-logs\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.694079 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-etc-podinfo\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.694148 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-config-data\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.796367 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-public-tls-certs\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.796433 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-internal-tls-certs\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.796476 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-combined-ca-bundle\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.796516 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-scripts\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.796556 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-config-data-merged\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.796580 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-config-data-custom\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.796624 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b58lx\" (UniqueName: \"kubernetes.io/projected/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-kube-api-access-b58lx\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.796641 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-logs\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.796674 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-etc-podinfo\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.796708 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-config-data\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.799784 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-logs\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.802861 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-config-data-merged\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.812903 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-internal-tls-certs\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.813151 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-config-data-custom\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.813361 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-public-tls-certs\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.813613 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-scripts\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.823084 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-config-data\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.828805 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-combined-ca-bundle\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.829192 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-etc-podinfo\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.858561 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b58lx\" (UniqueName: \"kubernetes.io/projected/5105e55b-cea6-4b20-bf0a-f7f0410f8aa9-kube-api-access-b58lx\") pod \"ironic-7f7db74854-hkzkt\" (UID: \"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9\") " pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:09 crc kubenswrapper[4895]: I0129 09:02:09.882057 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:10 crc kubenswrapper[4895]: I0129 09:02:10.252708 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d97428ad-71cd-4135-8e6e-157d27ddb70f","Type":"ContainerStarted","Data":"689eb7317bc6e20c9682146344416f13cf748b2389c1963e1fd3bbce99996fd7"}
Jan 29 09:02:10 crc kubenswrapper[4895]: I0129 09:02:10.263810 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"15832a7e-aad3-40f5-8515-f53f55ecfaca","Type":"ContainerStarted","Data":"b304167c543841602aa600c2559d69afcd67c5cf6e6f039086923952c9a39491"}
Jan 29 09:02:10 crc kubenswrapper[4895]: I0129 09:02:10.264051 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="15832a7e-aad3-40f5-8515-f53f55ecfaca" containerName="cinder-api-log" containerID="cri-o://b694169882a3bd3f1a2409a2f46ec4e1dc46aa93430b06e0df3ceda90d16456f" gracePeriod=30
Jan 29 09:02:10 crc kubenswrapper[4895]: I0129 09:02:10.264188 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 29 09:02:10 crc kubenswrapper[4895]: I0129 09:02:10.264432 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="15832a7e-aad3-40f5-8515-f53f55ecfaca" containerName="cinder-api" containerID="cri-o://b304167c543841602aa600c2559d69afcd67c5cf6e6f039086923952c9a39491" gracePeriod=30
Jan 29 09:02:10 crc kubenswrapper[4895]: I0129 09:02:10.280427 4895 generic.go:334] "Generic (PLEG): container finished" podID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" containerID="f3f9f86cafd20a3738dd88ae972fcf8176e80eec27ddb8ee04ebd230b3b54880" exitCode=0
Jan 29 09:02:10 crc kubenswrapper[4895]: I0129 09:02:10.280657 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b8978dc4d-mk89b" event={"ID":"7ce22da7-d86c-4a45-97ca-f67baee5d1fc","Type":"ContainerDied","Data":"f3f9f86cafd20a3738dd88ae972fcf8176e80eec27ddb8ee04ebd230b3b54880"}
Jan 29 09:02:10 crc kubenswrapper[4895]: I0129 09:02:10.314054 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"310c3619-a1a7-4137-8e90-4646ec724cb8","Type":"ContainerStarted","Data":"c95ffdcd861fc1f11ac608f9c9e6f5dc2543639d7660779fb9ee4cba6a010ddd"}
Jan 29 09:02:10 crc kubenswrapper[4895]: I0129 09:02:10.320902 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.320870738 podStartE2EDuration="7.320870738s" podCreationTimestamp="2026-01-29 09:02:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:02:10.316408667 +0000 UTC m=+1271.957916813" watchObservedRunningTime="2026-01-29 09:02:10.320870738 +0000 UTC m=+1271.962378884"
Jan 29 09:02:10 crc kubenswrapper[4895]: I0129 09:02:10.582891 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-7f7db74854-hkzkt"]
Jan 29 09:02:11 crc kubenswrapper[4895]: I0129 09:02:11.310844 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-56569dbddd-srzk5"
Jan 29 09:02:11 crc kubenswrapper[4895]: I0129 09:02:11.325473 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-56569dbddd-srzk5"
Jan 29 09:02:11 crc kubenswrapper[4895]: I0129 09:02:11.332422 4895 generic.go:334] "Generic (PLEG): container finished" podID="15832a7e-aad3-40f5-8515-f53f55ecfaca" containerID="b694169882a3bd3f1a2409a2f46ec4e1dc46aa93430b06e0df3ceda90d16456f" exitCode=143
Jan 29 09:02:11 crc kubenswrapper[4895]: I0129 09:02:11.332526 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"15832a7e-aad3-40f5-8515-f53f55ecfaca","Type":"ContainerDied","Data":"b694169882a3bd3f1a2409a2f46ec4e1dc46aa93430b06e0df3ceda90d16456f"}
Jan 29 09:02:11 crc kubenswrapper[4895]: I0129 09:02:11.355141 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b8978dc4d-mk89b" event={"ID":"7ce22da7-d86c-4a45-97ca-f67baee5d1fc","Type":"ContainerStarted","Data":"b906e53d37e693f1f7edca9a3cf1c5d3f02bd98a81fd66a9c6ff5caecd3ed106"}
Jan 29 09:02:11 crc kubenswrapper[4895]: I0129 09:02:11.376107 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7f7db74854-hkzkt" event={"ID":"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9","Type":"ContainerStarted","Data":"b55bf65077fecbae1b4cdc0e4954d4fc0d5b74dab034fac3ee519516a9801706"}
Jan 29 09:02:11 crc kubenswrapper[4895]: I0129 09:02:11.376161 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7f7db74854-hkzkt" event={"ID":"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9","Type":"ContainerStarted","Data":"36e4dad66bbcc5741406b88a96a2342d5d45e5844c1455a3e838c791640f7843"}
Jan 29 09:02:11 crc kubenswrapper[4895]: I0129 09:02:11.478304 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.461717252 podStartE2EDuration="8.478269778s" podCreationTimestamp="2026-01-29 09:02:03 +0000 UTC" firstStartedPulling="2026-01-29 09:02:05.122279003 +0000 UTC m=+1266.763787149" lastFinishedPulling="2026-01-29 09:02:06.138831529 +0000 UTC m=+1267.780339675" observedRunningTime="2026-01-29 09:02:11.450249487 +0000 UTC m=+1273.091757633" watchObservedRunningTime="2026-01-29 09:02:11.478269778 +0000 UTC m=+1273.119777924"
Jan 29 09:02:11 crc kubenswrapper[4895]: I0129 09:02:11.804634 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-796fb887fb-dd2s5"
Jan 29 09:02:11 crc kubenswrapper[4895]: I0129 09:02:11.903677 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-56569dbddd-srzk5"]
Jan 29 09:02:12 crc kubenswrapper[4895]: I0129 09:02:12.391176 4895 generic.go:334] "Generic (PLEG): container finished" podID="15832a7e-aad3-40f5-8515-f53f55ecfaca" containerID="b304167c543841602aa600c2559d69afcd67c5cf6e6f039086923952c9a39491" exitCode=0
Jan 29 09:02:12 crc kubenswrapper[4895]: I0129 09:02:12.391278 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"15832a7e-aad3-40f5-8515-f53f55ecfaca","Type":"ContainerDied","Data":"b304167c543841602aa600c2559d69afcd67c5cf6e6f039086923952c9a39491"}
Jan 29 09:02:12 crc kubenswrapper[4895]: I0129 09:02:12.398751 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b8978dc4d-mk89b" event={"ID":"7ce22da7-d86c-4a45-97ca-f67baee5d1fc","Type":"ContainerStarted","Data":"d98174274baabb008ddd663863e74051e35710bf292009bd777c67ac20f88c44"}
Jan 29 09:02:12 crc kubenswrapper[4895]: I0129 09:02:12.400353 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-b8978dc4d-mk89b"
Jan 29 09:02:12 crc kubenswrapper[4895]: I0129 09:02:12.440069 4895 generic.go:334] "Generic (PLEG): container finished" podID="5105e55b-cea6-4b20-bf0a-f7f0410f8aa9" containerID="b55bf65077fecbae1b4cdc0e4954d4fc0d5b74dab034fac3ee519516a9801706" exitCode=0
Jan 29 09:02:12 crc kubenswrapper[4895]: I0129 09:02:12.440217 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7f7db74854-hkzkt" event={"ID":"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9","Type":"ContainerDied","Data":"b55bf65077fecbae1b4cdc0e4954d4fc0d5b74dab034fac3ee519516a9801706"}
Jan 29 09:02:12 crc kubenswrapper[4895]: I0129 09:02:12.449571 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-b8978dc4d-mk89b" podStartSLOduration=5.664505312 podStartE2EDuration="9.449547541s" podCreationTimestamp="2026-01-29 09:02:03 +0000 UTC" firstStartedPulling="2026-01-29 09:02:04.810722392 +0000 UTC m=+1266.452230538" lastFinishedPulling="2026-01-29 09:02:08.595764621 +0000 UTC m=+1270.237272767" observedRunningTime="2026-01-29 09:02:12.440606772 +0000 UTC m=+1274.082114918" watchObservedRunningTime="2026-01-29 09:02:12.449547541 +0000 UTC m=+1274.091055687"
Jan 29 09:02:12 crc kubenswrapper[4895]: I0129 09:02:12.455765 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"310c3619-a1a7-4137-8e90-4646ec724cb8","Type":"ContainerStarted","Data":"a4cd4bd94745299566c79a20695635d3af9886e3e850bd2458640ad3deb9376c"}
Jan 29 09:02:12 crc kubenswrapper[4895]: I0129 09:02:12.756236 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5f75d78756-glzhf"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.180721 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.315020 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-scripts\") pod \"15832a7e-aad3-40f5-8515-f53f55ecfaca\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") "
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.315114 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-config-data-custom\") pod \"15832a7e-aad3-40f5-8515-f53f55ecfaca\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") "
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.315163 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15832a7e-aad3-40f5-8515-f53f55ecfaca-logs\") pod \"15832a7e-aad3-40f5-8515-f53f55ecfaca\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") "
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.315251 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-config-data\") pod \"15832a7e-aad3-40f5-8515-f53f55ecfaca\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") "
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.315292 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/15832a7e-aad3-40f5-8515-f53f55ecfaca-etc-machine-id\") pod \"15832a7e-aad3-40f5-8515-f53f55ecfaca\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") "
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.315405 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-combined-ca-bundle\") pod \"15832a7e-aad3-40f5-8515-f53f55ecfaca\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") "
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.315432 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swqtf\" (UniqueName: \"kubernetes.io/projected/15832a7e-aad3-40f5-8515-f53f55ecfaca-kube-api-access-swqtf\") pod \"15832a7e-aad3-40f5-8515-f53f55ecfaca\" (UID: \"15832a7e-aad3-40f5-8515-f53f55ecfaca\") "
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.317157 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15832a7e-aad3-40f5-8515-f53f55ecfaca-logs" (OuterVolumeSpecName: "logs") pod "15832a7e-aad3-40f5-8515-f53f55ecfaca" (UID: "15832a7e-aad3-40f5-8515-f53f55ecfaca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.318088 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15832a7e-aad3-40f5-8515-f53f55ecfaca-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "15832a7e-aad3-40f5-8515-f53f55ecfaca" (UID: "15832a7e-aad3-40f5-8515-f53f55ecfaca"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.339297 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15832a7e-aad3-40f5-8515-f53f55ecfaca-kube-api-access-swqtf" (OuterVolumeSpecName: "kube-api-access-swqtf") pod "15832a7e-aad3-40f5-8515-f53f55ecfaca" (UID: "15832a7e-aad3-40f5-8515-f53f55ecfaca"). InnerVolumeSpecName "kube-api-access-swqtf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.347856 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "15832a7e-aad3-40f5-8515-f53f55ecfaca" (UID: "15832a7e-aad3-40f5-8515-f53f55ecfaca"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.355166 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-scripts" (OuterVolumeSpecName: "scripts") pod "15832a7e-aad3-40f5-8515-f53f55ecfaca" (UID: "15832a7e-aad3-40f5-8515-f53f55ecfaca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.417674 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.417711 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.417723 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15832a7e-aad3-40f5-8515-f53f55ecfaca-logs\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.417730 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/15832a7e-aad3-40f5-8515-f53f55ecfaca-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.417739 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swqtf\" (UniqueName: \"kubernetes.io/projected/15832a7e-aad3-40f5-8515-f53f55ecfaca-kube-api-access-swqtf\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.438172 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-config-data" (OuterVolumeSpecName: "config-data") pod "15832a7e-aad3-40f5-8515-f53f55ecfaca" (UID: "15832a7e-aad3-40f5-8515-f53f55ecfaca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.453034 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15832a7e-aad3-40f5-8515-f53f55ecfaca" (UID: "15832a7e-aad3-40f5-8515-f53f55ecfaca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.489045 4895 generic.go:334] "Generic (PLEG): container finished" podID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" containerID="d98174274baabb008ddd663863e74051e35710bf292009bd777c67ac20f88c44" exitCode=1
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.489764 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b8978dc4d-mk89b" event={"ID":"7ce22da7-d86c-4a45-97ca-f67baee5d1fc","Type":"ContainerDied","Data":"d98174274baabb008ddd663863e74051e35710bf292009bd777c67ac20f88c44"}
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.490962 4895 scope.go:117] "RemoveContainer" containerID="d98174274baabb008ddd663863e74051e35710bf292009bd777c67ac20f88c44"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.500898 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7f7db74854-hkzkt" event={"ID":"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9","Type":"ContainerStarted","Data":"a9918ac3dbf8018daa26e62ff38c3333bcc5bb821c20902137695417c3bd0b7d"}
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.500977 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7f7db74854-hkzkt" event={"ID":"5105e55b-cea6-4b20-bf0a-f7f0410f8aa9","Type":"ContainerStarted","Data":"d4c40bb9ada7f5cd9f43f402a07a7a2688e35ba9398cf8a1fa28a0a16e354cf5"}
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.502855 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-7f7db74854-hkzkt"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.518225 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"310c3619-a1a7-4137-8e90-4646ec724cb8","Type":"ContainerStarted","Data":"36e4a3c0a319e7349b63292e5299578ef2d7528004c704a0cbab5e1dd3e6ea70"}
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.519871 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.519899 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15832a7e-aad3-40f5-8515-f53f55ecfaca-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.527054 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"15832a7e-aad3-40f5-8515-f53f55ecfaca","Type":"ContainerDied","Data":"e5d160cab68f078b6dd18d162d30e2b097855dc5a55ba8f831c43ca70f42abc9"}
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.527096 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.527126 4895 scope.go:117] "RemoveContainer" containerID="b304167c543841602aa600c2559d69afcd67c5cf6e6f039086923952c9a39491"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.527218 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-56569dbddd-srzk5" podUID="d2537c60-3372-4ac4-b801-808c93e9cf6f" containerName="placement-log" containerID="cri-o://6072e11f24fb74d79213dd357f09ecf4eade987e75900d63c8c2f3c6fc544655" gracePeriod=30
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.527340 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-56569dbddd-srzk5" podUID="d2537c60-3372-4ac4-b801-808c93e9cf6f" containerName="placement-api" containerID="cri-o://c89d89f0739397f15d0ce3d5e15228dd0148bb6c356a08ddcd3a367add57bd84" gracePeriod=30
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.557070 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-7f7db74854-hkzkt" podStartSLOduration=4.557034264 podStartE2EDuration="4.557034264s" podCreationTimestamp="2026-01-29 09:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:02:13.548763443 +0000 UTC m=+1275.190271589" watchObservedRunningTime="2026-01-29 09:02:13.557034264 +0000 UTC m=+1275.198542410"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.611798 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.612998 4895 scope.go:117] "RemoveContainer" containerID="b694169882a3bd3f1a2409a2f46ec4e1dc46aa93430b06e0df3ceda90d16456f"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.635928 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.654101 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 29 09:02:13 crc kubenswrapper[4895]: E0129 09:02:13.654728 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15832a7e-aad3-40f5-8515-f53f55ecfaca" containerName="cinder-api"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.654750 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="15832a7e-aad3-40f5-8515-f53f55ecfaca" containerName="cinder-api"
Jan 29 09:02:13 crc kubenswrapper[4895]: E0129 09:02:13.654772 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15832a7e-aad3-40f5-8515-f53f55ecfaca" containerName="cinder-api-log"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.654780 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="15832a7e-aad3-40f5-8515-f53f55ecfaca" containerName="cinder-api-log"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.655100 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="15832a7e-aad3-40f5-8515-f53f55ecfaca" containerName="cinder-api-log"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.655123 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="15832a7e-aad3-40f5-8515-f53f55ecfaca" containerName="cinder-api"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.656758 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.662589 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.663712 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.663897 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.673047 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.728304 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-scripts\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.728721 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e4960bc-f10d-48c0-835d-9616ae852ec8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.729243 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.729422 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.729519 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e4960bc-f10d-48c0-835d-9616ae852ec8-logs\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.729619 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-config-data\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.729726 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-config-data-custom\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.729839 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbxs6\" (UniqueName: \"kubernetes.io/projected/2e4960bc-f10d-48c0-835d-9616ae852ec8-kube-api-access-kbxs6\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.729907 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.733665 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.831355 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-scripts\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.831458 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e4960bc-f10d-48c0-835d-9616ae852ec8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.831504 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.831600 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0"
Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.831637 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e4960bc-f10d-48c0-835d-9616ae852ec8-logs\") pod \"cinder-api-0\" (UID:
\"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.831680 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-config-data\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.831742 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-config-data-custom\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.831789 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbxs6\" (UniqueName: \"kubernetes.io/projected/2e4960bc-f10d-48c0-835d-9616ae852ec8-kube-api-access-kbxs6\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.831812 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.835288 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e4960bc-f10d-48c0-835d-9616ae852ec8-logs\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.835750 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/2e4960bc-f10d-48c0-835d-9616ae852ec8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.842691 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.846841 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.848604 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-config-data\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.849306 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-scripts\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.849482 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-config-data-custom\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.867100 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4960bc-f10d-48c0-835d-9616ae852ec8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.879861 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbxs6\" (UniqueName: \"kubernetes.io/projected/2e4960bc-f10d-48c0-835d-9616ae852ec8-kube-api-access-kbxs6\") pod \"cinder-api-0\" (UID: \"2e4960bc-f10d-48c0-835d-9616ae852ec8\") " pod="openstack/cinder-api-0" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.926071 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.944046 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.976813 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:02:13 crc kubenswrapper[4895]: I0129 09:02:13.996219 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 09:02:14 crc kubenswrapper[4895]: I0129 09:02:14.065119 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-x7jwl"] Jan 29 09:02:14 crc kubenswrapper[4895]: I0129 09:02:14.065397 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" podUID="c62414b9-e9f1-4a5c-8448-09565e6fd3e8" containerName="dnsmasq-dns" containerID="cri-o://5f6011f07a38b30411edc845dc463413e2f781c4866e0fe214c028b16096a181" gracePeriod=10 Jan 29 09:02:14 crc kubenswrapper[4895]: I0129 09:02:14.279168 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" podUID="c62414b9-e9f1-4a5c-8448-09565e6fd3e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.159:5353: connect: connection refused" Jan 29 09:02:14 crc kubenswrapper[4895]: I0129 09:02:14.490491 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 09:02:14 crc kubenswrapper[4895]: I0129 09:02:14.588047 4895 generic.go:334] "Generic (PLEG): container finished" podID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" containerID="95aa02dc7aa67f6cf1091d5e7509299fefa0ae9085c5b4932366c4d3d008f653" exitCode=1 Jan 29 09:02:14 crc kubenswrapper[4895]: I0129 09:02:14.588159 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b8978dc4d-mk89b" event={"ID":"7ce22da7-d86c-4a45-97ca-f67baee5d1fc","Type":"ContainerDied","Data":"95aa02dc7aa67f6cf1091d5e7509299fefa0ae9085c5b4932366c4d3d008f653"} Jan 29 09:02:14 crc kubenswrapper[4895]: I0129 09:02:14.588204 4895 scope.go:117] "RemoveContainer" containerID="d98174274baabb008ddd663863e74051e35710bf292009bd777c67ac20f88c44" Jan 29 09:02:14 crc kubenswrapper[4895]: I0129 09:02:14.589044 4895 scope.go:117] "RemoveContainer" containerID="95aa02dc7aa67f6cf1091d5e7509299fefa0ae9085c5b4932366c4d3d008f653" Jan 
29 09:02:14 crc kubenswrapper[4895]: E0129 09:02:14.589384 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-b8978dc4d-mk89b_openstack(7ce22da7-d86c-4a45-97ca-f67baee5d1fc)\"" pod="openstack/ironic-b8978dc4d-mk89b" podUID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" Jan 29 09:02:14 crc kubenswrapper[4895]: I0129 09:02:14.593010 4895 generic.go:334] "Generic (PLEG): container finished" podID="d2537c60-3372-4ac4-b801-808c93e9cf6f" containerID="6072e11f24fb74d79213dd357f09ecf4eade987e75900d63c8c2f3c6fc544655" exitCode=143 Jan 29 09:02:14 crc kubenswrapper[4895]: I0129 09:02:14.593108 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56569dbddd-srzk5" event={"ID":"d2537c60-3372-4ac4-b801-808c93e9cf6f","Type":"ContainerDied","Data":"6072e11f24fb74d79213dd357f09ecf4eade987e75900d63c8c2f3c6fc544655"} Jan 29 09:02:14 crc kubenswrapper[4895]: I0129 09:02:14.606044 4895 generic.go:334] "Generic (PLEG): container finished" podID="c62414b9-e9f1-4a5c-8448-09565e6fd3e8" containerID="5f6011f07a38b30411edc845dc463413e2f781c4866e0fe214c028b16096a181" exitCode=0 Jan 29 09:02:14 crc kubenswrapper[4895]: I0129 09:02:14.606535 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" event={"ID":"c62414b9-e9f1-4a5c-8448-09565e6fd3e8","Type":"ContainerDied","Data":"5f6011f07a38b30411edc845dc463413e2f781c4866e0fe214c028b16096a181"} Jan 29 09:02:14 crc kubenswrapper[4895]: I0129 09:02:14.673789 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.227433 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15832a7e-aad3-40f5-8515-f53f55ecfaca" path="/var/lib/kubelet/pods/15832a7e-aad3-40f5-8515-f53f55ecfaca/volumes" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 
09:02:15.390621 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.393200 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.399327 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-p7858" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.399638 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.399895 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.409051 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.492148 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-openstack-config\") pod \"openstackclient\" (UID: \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\") " pod="openstack/openstackclient" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.492386 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-openstack-config-secret\") pod \"openstackclient\" (UID: \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\") " pod="openstack/openstackclient" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.492486 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\") " pod="openstack/openstackclient" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.492607 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd5md\" (UniqueName: \"kubernetes.io/projected/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-kube-api-access-zd5md\") pod \"openstackclient\" (UID: \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\") " pod="openstack/openstackclient" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.594559 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-openstack-config\") pod \"openstackclient\" (UID: \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\") " pod="openstack/openstackclient" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.594718 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-openstack-config-secret\") pod \"openstackclient\" (UID: \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\") " pod="openstack/openstackclient" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.594787 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\") " pod="openstack/openstackclient" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.594860 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd5md\" (UniqueName: \"kubernetes.io/projected/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-kube-api-access-zd5md\") pod \"openstackclient\" (UID: 
\"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\") " pod="openstack/openstackclient" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.597747 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-openstack-config\") pod \"openstackclient\" (UID: \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\") " pod="openstack/openstackclient" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.605708 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-openstack-config-secret\") pod \"openstackclient\" (UID: \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\") " pod="openstack/openstackclient" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.617636 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\") " pod="openstack/openstackclient" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.618543 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd5md\" (UniqueName: \"kubernetes.io/projected/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-kube-api-access-zd5md\") pod \"openstackclient\" (UID: \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\") " pod="openstack/openstackclient" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.637617 4895 generic.go:334] "Generic (PLEG): container finished" podID="844ab9b8-4b72-401d-b008-db11605452a8" containerID="77fa1e99c07ec72511bcebcb5fa3fab46d5b1010872dfe3f6de803d0a9842f34" exitCode=1 Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.638200 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" 
event={"ID":"844ab9b8-4b72-401d-b008-db11605452a8","Type":"ContainerDied","Data":"77fa1e99c07ec72511bcebcb5fa3fab46d5b1010872dfe3f6de803d0a9842f34"} Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.638253 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d97428ad-71cd-4135-8e6e-157d27ddb70f" containerName="cinder-scheduler" containerID="cri-o://1c78dd5771a3d8ca629ce90e502c041507fabb2cf1172b3ebf2f9e962a879c94" gracePeriod=30 Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.638433 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d97428ad-71cd-4135-8e6e-157d27ddb70f" containerName="probe" containerID="cri-o://689eb7317bc6e20c9682146344416f13cf748b2389c1963e1fd3bbce99996fd7" gracePeriod=30 Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.640166 4895 scope.go:117] "RemoveContainer" containerID="95aa02dc7aa67f6cf1091d5e7509299fefa0ae9085c5b4932366c4d3d008f653" Jan 29 09:02:15 crc kubenswrapper[4895]: E0129 09:02:15.640540 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-b8978dc4d-mk89b_openstack(7ce22da7-d86c-4a45-97ca-f67baee5d1fc)\"" pod="openstack/ironic-b8978dc4d-mk89b" podUID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.641115 4895 scope.go:117] "RemoveContainer" containerID="77fa1e99c07ec72511bcebcb5fa3fab46d5b1010872dfe3f6de803d0a9842f34" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.766242 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.896014 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.921000 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.940980 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.943288 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 09:02:15 crc kubenswrapper[4895]: I0129 09:02:15.957713 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 09:02:16 crc kubenswrapper[4895]: I0129 09:02:16.108403 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10bf685-c7de-4126-afc5-6bd68c3e8845-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f10bf685-c7de-4126-afc5-6bd68c3e8845\") " pod="openstack/openstackclient" Jan 29 09:02:16 crc kubenswrapper[4895]: I0129 09:02:16.108504 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f10bf685-c7de-4126-afc5-6bd68c3e8845-openstack-config-secret\") pod \"openstackclient\" (UID: \"f10bf685-c7de-4126-afc5-6bd68c3e8845\") " pod="openstack/openstackclient" Jan 29 09:02:16 crc kubenswrapper[4895]: I0129 09:02:16.108584 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f10bf685-c7de-4126-afc5-6bd68c3e8845-openstack-config\") pod \"openstackclient\" (UID: \"f10bf685-c7de-4126-afc5-6bd68c3e8845\") " pod="openstack/openstackclient" 
Jan 29 09:02:16 crc kubenswrapper[4895]: I0129 09:02:16.108781 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fns4w\" (UniqueName: \"kubernetes.io/projected/f10bf685-c7de-4126-afc5-6bd68c3e8845-kube-api-access-fns4w\") pod \"openstackclient\" (UID: \"f10bf685-c7de-4126-afc5-6bd68c3e8845\") " pod="openstack/openstackclient" Jan 29 09:02:16 crc kubenswrapper[4895]: I0129 09:02:16.211027 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f10bf685-c7de-4126-afc5-6bd68c3e8845-openstack-config-secret\") pod \"openstackclient\" (UID: \"f10bf685-c7de-4126-afc5-6bd68c3e8845\") " pod="openstack/openstackclient" Jan 29 09:02:16 crc kubenswrapper[4895]: I0129 09:02:16.211139 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f10bf685-c7de-4126-afc5-6bd68c3e8845-openstack-config\") pod \"openstackclient\" (UID: \"f10bf685-c7de-4126-afc5-6bd68c3e8845\") " pod="openstack/openstackclient" Jan 29 09:02:16 crc kubenswrapper[4895]: I0129 09:02:16.211270 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fns4w\" (UniqueName: \"kubernetes.io/projected/f10bf685-c7de-4126-afc5-6bd68c3e8845-kube-api-access-fns4w\") pod \"openstackclient\" (UID: \"f10bf685-c7de-4126-afc5-6bd68c3e8845\") " pod="openstack/openstackclient" Jan 29 09:02:16 crc kubenswrapper[4895]: I0129 09:02:16.211354 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10bf685-c7de-4126-afc5-6bd68c3e8845-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f10bf685-c7de-4126-afc5-6bd68c3e8845\") " pod="openstack/openstackclient" Jan 29 09:02:16 crc kubenswrapper[4895]: I0129 09:02:16.212544 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f10bf685-c7de-4126-afc5-6bd68c3e8845-openstack-config\") pod \"openstackclient\" (UID: \"f10bf685-c7de-4126-afc5-6bd68c3e8845\") " pod="openstack/openstackclient" Jan 29 09:02:16 crc kubenswrapper[4895]: I0129 09:02:16.221871 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10bf685-c7de-4126-afc5-6bd68c3e8845-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f10bf685-c7de-4126-afc5-6bd68c3e8845\") " pod="openstack/openstackclient" Jan 29 09:02:16 crc kubenswrapper[4895]: I0129 09:02:16.223559 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f10bf685-c7de-4126-afc5-6bd68c3e8845-openstack-config-secret\") pod \"openstackclient\" (UID: \"f10bf685-c7de-4126-afc5-6bd68c3e8845\") " pod="openstack/openstackclient" Jan 29 09:02:16 crc kubenswrapper[4895]: I0129 09:02:16.266477 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fns4w\" (UniqueName: \"kubernetes.io/projected/f10bf685-c7de-4126-afc5-6bd68c3e8845-kube-api-access-fns4w\") pod \"openstackclient\" (UID: \"f10bf685-c7de-4126-afc5-6bd68c3e8845\") " pod="openstack/openstackclient" Jan 29 09:02:16 crc kubenswrapper[4895]: I0129 09:02:16.273818 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 09:02:17 crc kubenswrapper[4895]: E0129 09:02:17.617572 4895 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 29 09:02:17 crc kubenswrapper[4895]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4_0(7a31d979e8ad8ece60f1d644c81d2f0f2c738da451695a4122e4724cb6db0134): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7a31d979e8ad8ece60f1d644c81d2f0f2c738da451695a4122e4724cb6db0134" Netns:"/var/run/netns/c88c6f44-de12-467e-bd92-e7ed6e62f599" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=7a31d979e8ad8ece60f1d644c81d2f0f2c738da451695a4122e4724cb6db0134;K8S_POD_UID=ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4]: expected pod UID "ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4" but got "f10bf685-c7de-4126-afc5-6bd68c3e8845" from Kube API Jan 29 09:02:17 crc kubenswrapper[4895]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 09:02:17 crc kubenswrapper[4895]: > Jan 29 09:02:17 crc kubenswrapper[4895]: E0129 09:02:17.624441 4895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 29 09:02:17 crc kubenswrapper[4895]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_openstackclient_openstack_ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4_0(7a31d979e8ad8ece60f1d644c81d2f0f2c738da451695a4122e4724cb6db0134): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7a31d979e8ad8ece60f1d644c81d2f0f2c738da451695a4122e4724cb6db0134" Netns:"/var/run/netns/c88c6f44-de12-467e-bd92-e7ed6e62f599" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=7a31d979e8ad8ece60f1d644c81d2f0f2c738da451695a4122e4724cb6db0134;K8S_POD_UID=ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4]: expected pod UID "ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4" but got "f10bf685-c7de-4126-afc5-6bd68c3e8845" from Kube API Jan 29 09:02:17 crc kubenswrapper[4895]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 09:02:17 crc kubenswrapper[4895]: > pod="openstack/openstackclient" Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.655378 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.756130 4895 generic.go:334] "Generic (PLEG): container finished" podID="d2537c60-3372-4ac4-b801-808c93e9cf6f" containerID="c89d89f0739397f15d0ce3d5e15228dd0148bb6c356a08ddcd3a367add57bd84" exitCode=0 Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.756243 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56569dbddd-srzk5" event={"ID":"d2537c60-3372-4ac4-b801-808c93e9cf6f","Type":"ContainerDied","Data":"c89d89f0739397f15d0ce3d5e15228dd0148bb6c356a08ddcd3a367add57bd84"} Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.761497 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-dns-swift-storage-0\") pod \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.761664 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9s4b\" (UniqueName: \"kubernetes.io/projected/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-kube-api-access-z9s4b\") pod \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.761730 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-dns-svc\") pod \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.761779 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-config\") pod \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\" 
(UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.761837 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-ovsdbserver-nb\") pod \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.761892 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-ovsdbserver-sb\") pod \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\" (UID: \"c62414b9-e9f1-4a5c-8448-09565e6fd3e8\") " Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.790509 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-kube-api-access-z9s4b" (OuterVolumeSpecName: "kube-api-access-z9s4b") pod "c62414b9-e9f1-4a5c-8448-09565e6fd3e8" (UID: "c62414b9-e9f1-4a5c-8448-09565e6fd3e8"). InnerVolumeSpecName "kube-api-access-z9s4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.792045 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" event={"ID":"c62414b9-e9f1-4a5c-8448-09565e6fd3e8","Type":"ContainerDied","Data":"a691716ff38de0b18c63d45e1e1b25c533fb87d2e9fadde7886af87f6725648d"} Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.792139 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-x7jwl" Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.794176 4895 scope.go:117] "RemoveContainer" containerID="5f6011f07a38b30411edc845dc463413e2f781c4866e0fe214c028b16096a181" Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.840771 4895 generic.go:334] "Generic (PLEG): container finished" podID="d97428ad-71cd-4135-8e6e-157d27ddb70f" containerID="689eb7317bc6e20c9682146344416f13cf748b2389c1963e1fd3bbce99996fd7" exitCode=0 Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.841151 4895 generic.go:334] "Generic (PLEG): container finished" podID="d97428ad-71cd-4135-8e6e-157d27ddb70f" containerID="1c78dd5771a3d8ca629ce90e502c041507fabb2cf1172b3ebf2f9e962a879c94" exitCode=0 Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.841404 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d97428ad-71cd-4135-8e6e-157d27ddb70f","Type":"ContainerDied","Data":"689eb7317bc6e20c9682146344416f13cf748b2389c1963e1fd3bbce99996fd7"} Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.841580 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d97428ad-71cd-4135-8e6e-157d27ddb70f","Type":"ContainerDied","Data":"1c78dd5771a3d8ca629ce90e502c041507fabb2cf1172b3ebf2f9e962a879c94"} Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.866273 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.873268 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9s4b\" (UniqueName: \"kubernetes.io/projected/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-kube-api-access-z9s4b\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.893883 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.898994 4895 scope.go:117] "RemoveContainer" containerID="6cf876b9ccb15f27dca5a8f6d6173d11876af82e66557a745c2b5a7206f2923b" Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.901270 4895 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4" podUID="f10bf685-c7de-4126-afc5-6bd68c3e8845" Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.966309 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.975664 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd5md\" (UniqueName: \"kubernetes.io/projected/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-kube-api-access-zd5md\") pod \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\" (UID: \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\") " Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.975806 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-openstack-config\") pod \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\" (UID: \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\") " Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.975962 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-openstack-config-secret\") pod \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\" (UID: \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\") " Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.976295 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-combined-ca-bundle\") pod \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\" (UID: \"ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4\") " Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.977857 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4" (UID: "ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.993168 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4" (UID: "ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:17 crc kubenswrapper[4895]: I0129 09:02:17.994406 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-kube-api-access-zd5md" (OuterVolumeSpecName: "kube-api-access-zd5md") pod "ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4" (UID: "ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4"). InnerVolumeSpecName "kube-api-access-zd5md". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.011878 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4" (UID: "ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.013271 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-v7z9b"] Jan 29 09:02:18 crc kubenswrapper[4895]: E0129 09:02:18.014142 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2537c60-3372-4ac4-b801-808c93e9cf6f" containerName="placement-api" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.014169 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2537c60-3372-4ac4-b801-808c93e9cf6f" containerName="placement-api" Jan 29 09:02:18 crc kubenswrapper[4895]: E0129 09:02:18.014363 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c62414b9-e9f1-4a5c-8448-09565e6fd3e8" containerName="init" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.014380 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c62414b9-e9f1-4a5c-8448-09565e6fd3e8" containerName="init" Jan 29 09:02:18 crc kubenswrapper[4895]: E0129 09:02:18.014391 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c62414b9-e9f1-4a5c-8448-09565e6fd3e8" containerName="dnsmasq-dns" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.014399 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c62414b9-e9f1-4a5c-8448-09565e6fd3e8" containerName="dnsmasq-dns" Jan 29 09:02:18 crc kubenswrapper[4895]: E0129 09:02:18.014433 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2537c60-3372-4ac4-b801-808c93e9cf6f" containerName="placement-log" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.014445 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2537c60-3372-4ac4-b801-808c93e9cf6f" containerName="placement-log" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.014677 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2537c60-3372-4ac4-b801-808c93e9cf6f" containerName="placement-api" Jan 29 09:02:18 crc 
kubenswrapper[4895]: I0129 09:02:18.014694 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2537c60-3372-4ac4-b801-808c93e9cf6f" containerName="placement-log" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.014706 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c62414b9-e9f1-4a5c-8448-09565e6fd3e8" containerName="dnsmasq-dns" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.017060 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.021685 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.022027 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.037497 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-v7z9b"] Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.081122 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-combined-ca-bundle\") pod \"d2537c60-3372-4ac4-b801-808c93e9cf6f\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.081547 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-config-data\") pod \"d2537c60-3372-4ac4-b801-808c93e9cf6f\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.081596 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsnzq\" (UniqueName: 
\"kubernetes.io/projected/d2537c60-3372-4ac4-b801-808c93e9cf6f-kube-api-access-bsnzq\") pod \"d2537c60-3372-4ac4-b801-808c93e9cf6f\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.082021 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-internal-tls-certs\") pod \"d2537c60-3372-4ac4-b801-808c93e9cf6f\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.082126 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2537c60-3372-4ac4-b801-808c93e9cf6f-logs\") pod \"d2537c60-3372-4ac4-b801-808c93e9cf6f\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.082148 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-scripts\") pod \"d2537c60-3372-4ac4-b801-808c93e9cf6f\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.082260 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-public-tls-certs\") pod \"d2537c60-3372-4ac4-b801-808c93e9cf6f\" (UID: \"d2537c60-3372-4ac4-b801-808c93e9cf6f\") " Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.083750 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.083773 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" 
(UniqueName: \"kubernetes.io/secret/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.083785 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.083795 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd5md\" (UniqueName: \"kubernetes.io/projected/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4-kube-api-access-zd5md\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.084340 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2537c60-3372-4ac4-b801-808c93e9cf6f-logs" (OuterVolumeSpecName: "logs") pod "d2537c60-3372-4ac4-b801-808c93e9cf6f" (UID: "d2537c60-3372-4ac4-b801-808c93e9cf6f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.086898 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.099002 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c62414b9-e9f1-4a5c-8448-09565e6fd3e8" (UID: "c62414b9-e9f1-4a5c-8448-09565e6fd3e8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.124656 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-config" (OuterVolumeSpecName: "config") pod "c62414b9-e9f1-4a5c-8448-09565e6fd3e8" (UID: "c62414b9-e9f1-4a5c-8448-09565e6fd3e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.149145 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2537c60-3372-4ac4-b801-808c93e9cf6f-kube-api-access-bsnzq" (OuterVolumeSpecName: "kube-api-access-bsnzq") pod "d2537c60-3372-4ac4-b801-808c93e9cf6f" (UID: "d2537c60-3372-4ac4-b801-808c93e9cf6f"). InnerVolumeSpecName "kube-api-access-bsnzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.185735 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d97428ad-71cd-4135-8e6e-157d27ddb70f-etc-machine-id\") pod \"d97428ad-71cd-4135-8e6e-157d27ddb70f\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.186017 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-combined-ca-bundle\") pod \"d97428ad-71cd-4135-8e6e-157d27ddb70f\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.186048 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-config-data\") pod \"d97428ad-71cd-4135-8e6e-157d27ddb70f\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " Jan 
29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.186067 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-scripts\") pod \"d97428ad-71cd-4135-8e6e-157d27ddb70f\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.186112 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4j5l\" (UniqueName: \"kubernetes.io/projected/d97428ad-71cd-4135-8e6e-157d27ddb70f-kube-api-access-c4j5l\") pod \"d97428ad-71cd-4135-8e6e-157d27ddb70f\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.186127 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-config-data-custom\") pod \"d97428ad-71cd-4135-8e6e-157d27ddb70f\" (UID: \"d97428ad-71cd-4135-8e6e-157d27ddb70f\") " Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.186425 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrdf6\" (UniqueName: \"kubernetes.io/projected/1318c5c6-26bf-46e6-aba5-ab4e024be588-kube-api-access-nrdf6\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.186469 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1318c5c6-26bf-46e6-aba5-ab4e024be588-etc-podinfo\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.186507 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/1318c5c6-26bf-46e6-aba5-ab4e024be588-var-lib-ironic\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.186555 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-combined-ca-bundle\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.186582 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-config\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.186634 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/1318c5c6-26bf-46e6-aba5-ab4e024be588-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.186670 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-scripts\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 
crc kubenswrapper[4895]: I0129 09:02:18.186757 4895 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.186769 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsnzq\" (UniqueName: \"kubernetes.io/projected/d2537c60-3372-4ac4-b801-808c93e9cf6f-kube-api-access-bsnzq\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.186781 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2537c60-3372-4ac4-b801-808c93e9cf6f-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.186794 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.193238 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.193322 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d97428ad-71cd-4135-8e6e-157d27ddb70f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d97428ad-71cd-4135-8e6e-157d27ddb70f" (UID: "d97428ad-71cd-4135-8e6e-157d27ddb70f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.196434 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-scripts" (OuterVolumeSpecName: "scripts") pod "d2537c60-3372-4ac4-b801-808c93e9cf6f" (UID: "d2537c60-3372-4ac4-b801-808c93e9cf6f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: W0129 09:02:18.219207 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e4960bc_f10d_48c0_835d_9616ae852ec8.slice/crio-078c0fda40fca5bccce02288e51bda00831e2fa5a0d0ad5dd25198dd64e11a30 WatchSource:0}: Error finding container 078c0fda40fca5bccce02288e51bda00831e2fa5a0d0ad5dd25198dd64e11a30: Status 404 returned error can't find the container with id 078c0fda40fca5bccce02288e51bda00831e2fa5a0d0ad5dd25198dd64e11a30 Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.219230 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-scripts" (OuterVolumeSpecName: "scripts") pod "d97428ad-71cd-4135-8e6e-157d27ddb70f" (UID: "d97428ad-71cd-4135-8e6e-157d27ddb70f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.222408 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97428ad-71cd-4135-8e6e-157d27ddb70f-kube-api-access-c4j5l" (OuterVolumeSpecName: "kube-api-access-c4j5l") pod "d97428ad-71cd-4135-8e6e-157d27ddb70f" (UID: "d97428ad-71cd-4135-8e6e-157d27ddb70f"). InnerVolumeSpecName "kube-api-access-c4j5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.236208 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d97428ad-71cd-4135-8e6e-157d27ddb70f" (UID: "d97428ad-71cd-4135-8e6e-157d27ddb70f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.244952 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c62414b9-e9f1-4a5c-8448-09565e6fd3e8" (UID: "c62414b9-e9f1-4a5c-8448-09565e6fd3e8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.290086 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrdf6\" (UniqueName: \"kubernetes.io/projected/1318c5c6-26bf-46e6-aba5-ab4e024be588-kube-api-access-nrdf6\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.290140 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1318c5c6-26bf-46e6-aba5-ab4e024be588-etc-podinfo\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.290218 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/1318c5c6-26bf-46e6-aba5-ab4e024be588-var-lib-ironic\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.290294 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-combined-ca-bundle\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: 
\"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.290323 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-config\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.290399 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/1318c5c6-26bf-46e6-aba5-ab4e024be588-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.290450 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-scripts\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.290579 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d97428ad-71cd-4135-8e6e-157d27ddb70f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.290592 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.290602 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.290611 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.290622 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4j5l\" (UniqueName: \"kubernetes.io/projected/d97428ad-71cd-4135-8e6e-157d27ddb70f-kube-api-access-c4j5l\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.290632 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.292119 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/1318c5c6-26bf-46e6-aba5-ab4e024be588-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.292649 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c62414b9-e9f1-4a5c-8448-09565e6fd3e8" (UID: "c62414b9-e9f1-4a5c-8448-09565e6fd3e8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.294859 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/1318c5c6-26bf-46e6-aba5-ab4e024be588-var-lib-ironic\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.312445 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1318c5c6-26bf-46e6-aba5-ab4e024be588-etc-podinfo\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.323881 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.329736 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-combined-ca-bundle\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.337606 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-config\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.362006 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-scripts\") pod 
\"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.365553 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrdf6\" (UniqueName: \"kubernetes.io/projected/1318c5c6-26bf-46e6-aba5-ab4e024be588-kube-api-access-nrdf6\") pod \"ironic-inspector-db-sync-v7z9b\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") " pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: W0129 09:02:18.375207 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf10bf685_c7de_4126_afc5_6bd68c3e8845.slice/crio-cba4431caa20346b503ca2838353059a588b0ce49868126d7553c4e865f0d1dc WatchSource:0}: Error finding container cba4431caa20346b503ca2838353059a588b0ce49868126d7553c4e865f0d1dc: Status 404 returned error can't find the container with id cba4431caa20346b503ca2838353059a588b0ce49868126d7553c4e865f0d1dc Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.393287 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.399423 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c62414b9-e9f1-4a5c-8448-09565e6fd3e8" (UID: "c62414b9-e9f1-4a5c-8448-09565e6fd3e8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.425352 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d97428ad-71cd-4135-8e6e-157d27ddb70f" (UID: "d97428ad-71cd-4135-8e6e-157d27ddb70f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.483036 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-config-data" (OuterVolumeSpecName: "config-data") pod "d2537c60-3372-4ac4-b801-808c93e9cf6f" (UID: "d2537c60-3372-4ac4-b801-808c93e9cf6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.484579 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2537c60-3372-4ac4-b801-808c93e9cf6f" (UID: "d2537c60-3372-4ac4-b801-808c93e9cf6f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.496170 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.496225 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c62414b9-e9f1-4a5c-8448-09565e6fd3e8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.496273 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.496287 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.582166 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d2537c60-3372-4ac4-b801-808c93e9cf6f" (UID: "d2537c60-3372-4ac4-b801-808c93e9cf6f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.600265 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d2537c60-3372-4ac4-b801-808c93e9cf6f" (UID: "d2537c60-3372-4ac4-b801-808c93e9cf6f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.601052 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.601095 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2537c60-3372-4ac4-b801-808c93e9cf6f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.603856 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-config-data" (OuterVolumeSpecName: "config-data") pod "d97428ad-71cd-4135-8e6e-157d27ddb70f" (UID: "d97428ad-71cd-4135-8e6e-157d27ddb70f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.610192 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.610281 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.650133 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-v7z9b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.703742 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97428ad-71cd-4135-8e6e-157d27ddb70f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.739761 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-x7jwl"] Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.757036 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-x7jwl"] Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.896313 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f10bf685-c7de-4126-afc5-6bd68c3e8845","Type":"ContainerStarted","Data":"cba4431caa20346b503ca2838353059a588b0ce49868126d7553c4e865f0d1dc"} Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.899316 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e4960bc-f10d-48c0-835d-9616ae852ec8","Type":"ContainerStarted","Data":"078c0fda40fca5bccce02288e51bda00831e2fa5a0d0ad5dd25198dd64e11a30"} Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.903160 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" event={"ID":"844ab9b8-4b72-401d-b008-db11605452a8","Type":"ContainerStarted","Data":"b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763"} Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.903426 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.910771 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"d97428ad-71cd-4135-8e6e-157d27ddb70f","Type":"ContainerDied","Data":"2e33cfa793e724efd672bdd58d29ce2a8f693d3e17ce88e59839bad8659037eb"} Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.910846 4895 scope.go:117] "RemoveContainer" containerID="689eb7317bc6e20c9682146344416f13cf748b2389c1963e1fd3bbce99996fd7" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.911036 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.927730 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.927793 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.929020 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.929521 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-56569dbddd-srzk5" Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.931071 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56569dbddd-srzk5" event={"ID":"d2537c60-3372-4ac4-b801-808c93e9cf6f","Type":"ContainerDied","Data":"e4b44bacaea02ed29aae1c939f225fc040458159b14a315ded32563607144072"} Jan 29 09:02:18 crc kubenswrapper[4895]: I0129 09:02:18.934714 4895 scope.go:117] "RemoveContainer" containerID="95aa02dc7aa67f6cf1091d5e7509299fefa0ae9085c5b4932366c4d3d008f653" Jan 29 09:02:18 crc kubenswrapper[4895]: E0129 09:02:18.935096 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-b8978dc4d-mk89b_openstack(7ce22da7-d86c-4a45-97ca-f67baee5d1fc)\"" pod="openstack/ironic-b8978dc4d-mk89b" podUID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.001375 4895 scope.go:117] "RemoveContainer" containerID="1c78dd5771a3d8ca629ce90e502c041507fabb2cf1172b3ebf2f9e962a879c94" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.030164 4895 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4" podUID="f10bf685-c7de-4126-afc5-6bd68c3e8845" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.036206 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.088765 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.099751 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-56569dbddd-srzk5"] Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.110788 4895 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/placement-56569dbddd-srzk5"] Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.110875 4895 scope.go:117] "RemoveContainer" containerID="c89d89f0739397f15d0ce3d5e15228dd0148bb6c356a08ddcd3a367add57bd84" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.126006 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:02:19 crc kubenswrapper[4895]: E0129 09:02:19.126547 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97428ad-71cd-4135-8e6e-157d27ddb70f" containerName="probe" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.126563 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97428ad-71cd-4135-8e6e-157d27ddb70f" containerName="probe" Jan 29 09:02:19 crc kubenswrapper[4895]: E0129 09:02:19.126590 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97428ad-71cd-4135-8e6e-157d27ddb70f" containerName="cinder-scheduler" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.126597 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97428ad-71cd-4135-8e6e-157d27ddb70f" containerName="cinder-scheduler" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.126783 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d97428ad-71cd-4135-8e6e-157d27ddb70f" containerName="cinder-scheduler" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.126797 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d97428ad-71cd-4135-8e6e-157d27ddb70f" containerName="probe" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.127899 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.140378 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.143246 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.227982 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.228067 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.228122 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.228162 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-config-data\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 
09:02:19.228191 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-scripts\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.228216 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxg72\" (UniqueName: \"kubernetes.io/projected/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-kube-api-access-cxg72\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.237197 4895 scope.go:117] "RemoveContainer" containerID="6072e11f24fb74d79213dd357f09ecf4eade987e75900d63c8c2f3c6fc544655" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.258326 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4" path="/var/lib/kubelet/pods/ac9b0273-6f7e-44b0-ba91-ebba4d0a3aa4/volumes" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.258848 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c62414b9-e9f1-4a5c-8448-09565e6fd3e8" path="/var/lib/kubelet/pods/c62414b9-e9f1-4a5c-8448-09565e6fd3e8/volumes" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.259677 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2537c60-3372-4ac4-b801-808c93e9cf6f" path="/var/lib/kubelet/pods/d2537c60-3372-4ac4-b801-808c93e9cf6f/volumes" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.264526 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d97428ad-71cd-4135-8e6e-157d27ddb70f" path="/var/lib/kubelet/pods/d97428ad-71cd-4135-8e6e-157d27ddb70f/volumes" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.336669 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.336784 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.336840 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.338050 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-config-data\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.338212 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-scripts\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.338275 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxg72\" (UniqueName: 
\"kubernetes.io/projected/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-kube-api-access-cxg72\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.339323 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.350237 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.350319 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-config-data\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.356490 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-scripts\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.360016 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 
09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.379186 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-v7z9b"] Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.380469 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxg72\" (UniqueName: \"kubernetes.io/projected/96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19-kube-api-access-cxg72\") pod \"cinder-scheduler-0\" (UID: \"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19\") " pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.495273 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.509941 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.970892 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 09:02:19 crc kubenswrapper[4895]: I0129 09:02:19.987877 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-v7z9b" event={"ID":"1318c5c6-26bf-46e6-aba5-ab4e024be588","Type":"ContainerStarted","Data":"638e0c5b9fd9965ce3dcc2200a5c9e17a12f1d24b0a6e75195024235796c9b04"} Jan 29 09:02:21 crc kubenswrapper[4895]: I0129 09:02:21.025320 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19","Type":"ContainerStarted","Data":"4aba49c3e45e8a2c59548c71dd61e9c310b165a3c02a7e8abd3b74032715b6e7"} Jan 29 09:02:21 crc kubenswrapper[4895]: I0129 09:02:21.026066 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19","Type":"ContainerStarted","Data":"3a1b83d28637603546a83ce1b4a9af1d79bc71eef5a749af3c0fe7dc2e6ff838"} Jan 29 09:02:21 crc 
kubenswrapper[4895]: I0129 09:02:21.027568 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e4960bc-f10d-48c0-835d-9616ae852ec8","Type":"ContainerStarted","Data":"3fbe3b5b4245a6d2ed7fa93641778ed939e5c671a2732a34b76391efeac35f5b"} Jan 29 09:02:21 crc kubenswrapper[4895]: I0129 09:02:21.039193 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"310c3619-a1a7-4137-8e90-4646ec724cb8","Type":"ContainerStarted","Data":"311af08ae17256cc378e2896cb6c9536bd9860885daab02179edd3c8141b5d00"} Jan 29 09:02:21 crc kubenswrapper[4895]: I0129 09:02:21.040781 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:02:21 crc kubenswrapper[4895]: I0129 09:02:21.093843 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.960418372 podStartE2EDuration="16.093813578s" podCreationTimestamp="2026-01-29 09:02:05 +0000 UTC" firstStartedPulling="2026-01-29 09:02:08.491354212 +0000 UTC m=+1270.132862358" lastFinishedPulling="2026-01-29 09:02:17.624749418 +0000 UTC m=+1279.266257564" observedRunningTime="2026-01-29 09:02:21.07039707 +0000 UTC m=+1282.711905216" watchObservedRunningTime="2026-01-29 09:02:21.093813578 +0000 UTC m=+1282.735321724" Jan 29 09:02:21 crc kubenswrapper[4895]: I0129 09:02:21.799151 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-7f7db74854-hkzkt" Jan 29 09:02:21 crc kubenswrapper[4895]: I0129 09:02:21.890761 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-b8978dc4d-mk89b"] Jan 29 09:02:21 crc kubenswrapper[4895]: I0129 09:02:21.891547 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-b8978dc4d-mk89b" podUID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" containerName="ironic-api-log" 
containerID="cri-o://b906e53d37e693f1f7edca9a3cf1c5d3f02bd98a81fd66a9c6ff5caecd3ed106" gracePeriod=60 Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.102262 4895 generic.go:334] "Generic (PLEG): container finished" podID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" containerID="b906e53d37e693f1f7edca9a3cf1c5d3f02bd98a81fd66a9c6ff5caecd3ed106" exitCode=143 Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.104172 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b8978dc4d-mk89b" event={"ID":"7ce22da7-d86c-4a45-97ca-f67baee5d1fc","Type":"ContainerDied","Data":"b906e53d37e693f1f7edca9a3cf1c5d3f02bd98a81fd66a9c6ff5caecd3ed106"} Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.145250 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e4960bc-f10d-48c0-835d-9616ae852ec8","Type":"ContainerStarted","Data":"033e528f137138778954c559dd3035494456421534e9bccf16338812c9bbefe7"} Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.145314 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.173646 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=9.173617219 podStartE2EDuration="9.173617219s" podCreationTimestamp="2026-01-29 09:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:02:22.169804396 +0000 UTC m=+1283.811312562" watchObservedRunningTime="2026-01-29 09:02:22.173617219 +0000 UTC m=+1283.815125365" Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.180733 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6b548b4f8c-kc92t" Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.276506 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-6c686984cb-9nzt7"] Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.277153 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c686984cb-9nzt7" podUID="df4d1b85-39a1-46ad-8a21-60a165dbbf6d" containerName="neutron-api" containerID="cri-o://396eda2b3209074288f3195193f5775a3b1ffa1e6e4ee4854b7f6fad7a771887" gracePeriod=30 Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.277639 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c686984cb-9nzt7" podUID="df4d1b85-39a1-46ad-8a21-60a165dbbf6d" containerName="neutron-httpd" containerID="cri-o://37d6efe9ff64dc67d137d7446d909566f2a8b08d88bbbc5bfa19e50d85ce14ed" gracePeriod=30 Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.890598 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.960567 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data-custom\") pod \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.960702 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-logs\") pod \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.960774 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data-merged\") pod \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " Jan 
29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.960893 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data\") pod \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.960989 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t54vs\" (UniqueName: \"kubernetes.io/projected/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-kube-api-access-t54vs\") pod \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.961168 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-scripts\") pod \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.961233 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-combined-ca-bundle\") pod \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.961308 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-etc-podinfo\") pod \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\" (UID: \"7ce22da7-d86c-4a45-97ca-f67baee5d1fc\") " Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.966497 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-logs" (OuterVolumeSpecName: "logs") pod 
"7ce22da7-d86c-4a45-97ca-f67baee5d1fc" (UID: "7ce22da7-d86c-4a45-97ca-f67baee5d1fc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.978439 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-kube-api-access-t54vs" (OuterVolumeSpecName: "kube-api-access-t54vs") pod "7ce22da7-d86c-4a45-97ca-f67baee5d1fc" (UID: "7ce22da7-d86c-4a45-97ca-f67baee5d1fc"). InnerVolumeSpecName "kube-api-access-t54vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.994173 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "7ce22da7-d86c-4a45-97ca-f67baee5d1fc" (UID: "7ce22da7-d86c-4a45-97ca-f67baee5d1fc"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:02:22 crc kubenswrapper[4895]: I0129 09:02:22.998973 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-scripts" (OuterVolumeSpecName: "scripts") pod "7ce22da7-d86c-4a45-97ca-f67baee5d1fc" (UID: "7ce22da7-d86c-4a45-97ca-f67baee5d1fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.003070 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "7ce22da7-d86c-4a45-97ca-f67baee5d1fc" (UID: "7ce22da7-d86c-4a45-97ca-f67baee5d1fc"). InnerVolumeSpecName "etc-podinfo". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.011736 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7ce22da7-d86c-4a45-97ca-f67baee5d1fc" (UID: "7ce22da7-d86c-4a45-97ca-f67baee5d1fc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.011771 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data" (OuterVolumeSpecName: "config-data") pod "7ce22da7-d86c-4a45-97ca-f67baee5d1fc" (UID: "7ce22da7-d86c-4a45-97ca-f67baee5d1fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.065381 4895 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-etc-podinfo\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.065439 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.065453 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.065520 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 
29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.065535 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.065549 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t54vs\" (UniqueName: \"kubernetes.io/projected/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-kube-api-access-t54vs\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.065562 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.091581 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ce22da7-d86c-4a45-97ca-f67baee5d1fc" (UID: "7ce22da7-d86c-4a45-97ca-f67baee5d1fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.168812 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce22da7-d86c-4a45-97ca-f67baee5d1fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.195145 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19","Type":"ContainerStarted","Data":"de1d0c4975bb5d681eff7e2675ba4f87c292ed81d93ead69313b3a01a5a1cb15"} Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.207713 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-b8978dc4d-mk89b" Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.208411 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b8978dc4d-mk89b" event={"ID":"7ce22da7-d86c-4a45-97ca-f67baee5d1fc","Type":"ContainerDied","Data":"52eefe5758f610d1ed7c9b1a2160c70f893bd30939dbfa7616a6ff16bd9cbf30"} Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.208550 4895 scope.go:117] "RemoveContainer" containerID="95aa02dc7aa67f6cf1091d5e7509299fefa0ae9085c5b4932366c4d3d008f653" Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.216323 4895 generic.go:334] "Generic (PLEG): container finished" podID="df4d1b85-39a1-46ad-8a21-60a165dbbf6d" containerID="37d6efe9ff64dc67d137d7446d909566f2a8b08d88bbbc5bfa19e50d85ce14ed" exitCode=0 Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.236902 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.236877706 podStartE2EDuration="4.236877706s" podCreationTimestamp="2026-01-29 09:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:02:23.235016667 +0000 UTC m=+1284.876524813" watchObservedRunningTime="2026-01-29 09:02:23.236877706 +0000 UTC m=+1284.878385852" Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.256386 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c686984cb-9nzt7" event={"ID":"df4d1b85-39a1-46ad-8a21-60a165dbbf6d","Type":"ContainerDied","Data":"37d6efe9ff64dc67d137d7446d909566f2a8b08d88bbbc5bfa19e50d85ce14ed"} Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.293283 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-b8978dc4d-mk89b"] Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.308382 4895 scope.go:117] "RemoveContainer" 
containerID="b906e53d37e693f1f7edca9a3cf1c5d3f02bd98a81fd66a9c6ff5caecd3ed106" Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.312601 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-b8978dc4d-mk89b"] Jan 29 09:02:23 crc kubenswrapper[4895]: I0129 09:02:23.340019 4895 scope.go:117] "RemoveContainer" containerID="f3f9f86cafd20a3738dd88ae972fcf8176e80eec27ddb8ee04ebd230b3b54880" Jan 29 09:02:23 crc kubenswrapper[4895]: E0129 09:02:23.614041 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763 is running failed: container process not found" containerID="b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763" cmd=["/bin/true"] Jan 29 09:02:23 crc kubenswrapper[4895]: E0129 09:02:23.614108 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763 is running failed: container process not found" containerID="b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763" cmd=["/bin/true"] Jan 29 09:02:23 crc kubenswrapper[4895]: E0129 09:02:23.615791 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763 is running failed: container process not found" containerID="b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763" cmd=["/bin/true"] Jan 29 09:02:23 crc kubenswrapper[4895]: E0129 09:02:23.615813 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763 is running failed: 
container process not found" containerID="b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763" cmd=["/bin/true"] Jan 29 09:02:23 crc kubenswrapper[4895]: E0129 09:02:23.616299 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763 is running failed: container process not found" containerID="b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763" cmd=["/bin/true"] Jan 29 09:02:23 crc kubenswrapper[4895]: E0129 09:02:23.616331 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763 is running failed: container process not found" containerID="b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763" cmd=["/bin/true"] Jan 29 09:02:23 crc kubenswrapper[4895]: E0129 09:02:23.616391 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763 is running failed: container process not found" probeType="Readiness" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" podUID="844ab9b8-4b72-401d-b008-db11605452a8" containerName="ironic-neutron-agent" Jan 29 09:02:23 crc kubenswrapper[4895]: E0129 09:02:23.616345 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763 is running failed: container process not found" probeType="Liveness" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" podUID="844ab9b8-4b72-401d-b008-db11605452a8" containerName="ironic-neutron-agent" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.233972 4895 generic.go:334] 
"Generic (PLEG): container finished" podID="844ab9b8-4b72-401d-b008-db11605452a8" containerID="b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763" exitCode=1 Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.234120 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" event={"ID":"844ab9b8-4b72-401d-b008-db11605452a8","Type":"ContainerDied","Data":"b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763"} Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.234231 4895 scope.go:117] "RemoveContainer" containerID="77fa1e99c07ec72511bcebcb5fa3fab46d5b1010872dfe3f6de803d0a9842f34" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.235259 4895 scope.go:117] "RemoveContainer" containerID="b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763" Jan 29 09:02:24 crc kubenswrapper[4895]: E0129 09:02:24.235627 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-78c59f886f-xtrfg_openstack(844ab9b8-4b72-401d-b008-db11605452a8)\"" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" podUID="844ab9b8-4b72-401d-b008-db11605452a8" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.496218 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.744380 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5fb7b47b77-cq2p9"] Jan 29 09:02:24 crc kubenswrapper[4895]: E0129 09:02:24.745212 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" containerName="ironic-api-log" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.745234 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" containerName="ironic-api-log" Jan 29 09:02:24 crc kubenswrapper[4895]: E0129 09:02:24.745252 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" containerName="init" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.745259 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" containerName="init" Jan 29 09:02:24 crc kubenswrapper[4895]: E0129 09:02:24.745282 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" containerName="ironic-api" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.745291 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" containerName="ironic-api" Jan 29 09:02:24 crc kubenswrapper[4895]: E0129 09:02:24.745305 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" containerName="ironic-api" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.745311 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" containerName="ironic-api" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.745573 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" containerName="ironic-api-log" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.745625 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" containerName="ironic-api" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.746062 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" containerName="ironic-api" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.747807 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.751654 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.752388 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.752566 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.781901 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5fb7b47b77-cq2p9"] Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.818046 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-log-httpd\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.818137 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-internal-tls-certs\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.818164 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-public-tls-certs\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: 
I0129 09:02:24.818249 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqrwb\" (UniqueName: \"kubernetes.io/projected/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-kube-api-access-vqrwb\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.818272 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-combined-ca-bundle\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.818294 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-etc-swift\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.818417 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-config-data\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.818445 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-run-httpd\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 
09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.927063 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-config-data\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.927636 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-run-httpd\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.927755 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-log-httpd\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.927863 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-internal-tls-certs\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.927962 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-public-tls-certs\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.928172 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqrwb\" (UniqueName: \"kubernetes.io/projected/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-kube-api-access-vqrwb\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.928252 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-combined-ca-bundle\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.928349 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-etc-swift\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.929604 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-run-httpd\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.929830 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-log-httpd\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.936696 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-internal-tls-certs\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.937751 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-config-data\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.938850 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-public-tls-certs\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.941005 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-combined-ca-bundle\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.944853 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-etc-swift\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:24 crc kubenswrapper[4895]: I0129 09:02:24.950913 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqrwb\" (UniqueName: 
\"kubernetes.io/projected/cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687-kube-api-access-vqrwb\") pod \"swift-proxy-5fb7b47b77-cq2p9\" (UID: \"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687\") " pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:25 crc kubenswrapper[4895]: I0129 09:02:25.091898 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:25 crc kubenswrapper[4895]: I0129 09:02:25.241470 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce22da7-d86c-4a45-97ca-f67baee5d1fc" path="/var/lib/kubelet/pods/7ce22da7-d86c-4a45-97ca-f67baee5d1fc/volumes" Jan 29 09:02:26 crc kubenswrapper[4895]: I0129 09:02:26.175828 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5fb7b47b77-cq2p9"] Jan 29 09:02:26 crc kubenswrapper[4895]: W0129 09:02:26.196690 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfbdf3a1_a1a5_45af_87ee_c49eb5f9f687.slice/crio-d9f5c26b38e9f3d563d7c376297080cb028980281ee986a97413da946392a359 WatchSource:0}: Error finding container d9f5c26b38e9f3d563d7c376297080cb028980281ee986a97413da946392a359: Status 404 returned error can't find the container with id d9f5c26b38e9f3d563d7c376297080cb028980281ee986a97413da946392a359 Jan 29 09:02:26 crc kubenswrapper[4895]: I0129 09:02:26.323515 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5fb7b47b77-cq2p9" event={"ID":"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687","Type":"ContainerStarted","Data":"d9f5c26b38e9f3d563d7c376297080cb028980281ee986a97413da946392a359"} Jan 29 09:02:26 crc kubenswrapper[4895]: I0129 09:02:26.398462 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-v7z9b" event={"ID":"1318c5c6-26bf-46e6-aba5-ab4e024be588","Type":"ContainerStarted","Data":"88b21480dca21eab1a64888e5b3df62103995f907973c8161c3de0b2b23d0e38"} Jan 29 
09:02:26 crc kubenswrapper[4895]: I0129 09:02:26.443831 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-v7z9b" podStartSLOduration=3.453833445 podStartE2EDuration="9.44380646s" podCreationTimestamp="2026-01-29 09:02:17 +0000 UTC" firstStartedPulling="2026-01-29 09:02:19.360752868 +0000 UTC m=+1281.002261014" lastFinishedPulling="2026-01-29 09:02:25.350725883 +0000 UTC m=+1286.992234029" observedRunningTime="2026-01-29 09:02:26.426424934 +0000 UTC m=+1288.067933080" watchObservedRunningTime="2026-01-29 09:02:26.44380646 +0000 UTC m=+1288.085314606" Jan 29 09:02:27 crc kubenswrapper[4895]: I0129 09:02:27.414156 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5fb7b47b77-cq2p9" event={"ID":"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687","Type":"ContainerStarted","Data":"1a43a372f9abdc11efc06301202f87848523b8b18265dd39971b8f187413db6d"} Jan 29 09:02:27 crc kubenswrapper[4895]: I0129 09:02:27.414647 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5fb7b47b77-cq2p9" event={"ID":"cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687","Type":"ContainerStarted","Data":"5cb72001bb493666b063b5b1f37c94354f3ef57536c9c8eee7fe5b7af591b1d0"} Jan 29 09:02:27 crc kubenswrapper[4895]: I0129 09:02:27.415365 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:27 crc kubenswrapper[4895]: I0129 09:02:27.415417 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5fb7b47b77-cq2p9" Jan 29 09:02:27 crc kubenswrapper[4895]: I0129 09:02:27.448911 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5fb7b47b77-cq2p9" podStartSLOduration=3.448884669 podStartE2EDuration="3.448884669s" podCreationTimestamp="2026-01-29 09:02:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-29 09:02:27.443212487 +0000 UTC m=+1289.084720643" watchObservedRunningTime="2026-01-29 09:02:27.448884669 +0000 UTC m=+1289.090392815" Jan 29 09:02:27 crc kubenswrapper[4895]: I0129 09:02:27.763594 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:02:27 crc kubenswrapper[4895]: I0129 09:02:27.764015 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="ceilometer-central-agent" containerID="cri-o://c95ffdcd861fc1f11ac608f9c9e6f5dc2543639d7660779fb9ee4cba6a010ddd" gracePeriod=30 Jan 29 09:02:27 crc kubenswrapper[4895]: I0129 09:02:27.764828 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="proxy-httpd" containerID="cri-o://311af08ae17256cc378e2896cb6c9536bd9860885daab02179edd3c8141b5d00" gracePeriod=30 Jan 29 09:02:27 crc kubenswrapper[4895]: I0129 09:02:27.765153 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="sg-core" containerID="cri-o://36e4a3c0a319e7349b63292e5299578ef2d7528004c704a0cbab5e1dd3e6ea70" gracePeriod=30 Jan 29 09:02:27 crc kubenswrapper[4895]: I0129 09:02:27.765379 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="ceilometer-notification-agent" containerID="cri-o://a4cd4bd94745299566c79a20695635d3af9886e3e850bd2458640ad3deb9376c" gracePeriod=30 Jan 29 09:02:28 crc kubenswrapper[4895]: I0129 09:02:28.468615 4895 generic.go:334] "Generic (PLEG): container finished" podID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerID="311af08ae17256cc378e2896cb6c9536bd9860885daab02179edd3c8141b5d00" exitCode=0 Jan 29 09:02:28 crc 
kubenswrapper[4895]: I0129 09:02:28.468679 4895 generic.go:334] "Generic (PLEG): container finished" podID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerID="36e4a3c0a319e7349b63292e5299578ef2d7528004c704a0cbab5e1dd3e6ea70" exitCode=2 Jan 29 09:02:28 crc kubenswrapper[4895]: I0129 09:02:28.468690 4895 generic.go:334] "Generic (PLEG): container finished" podID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerID="c95ffdcd861fc1f11ac608f9c9e6f5dc2543639d7660779fb9ee4cba6a010ddd" exitCode=0 Jan 29 09:02:28 crc kubenswrapper[4895]: I0129 09:02:28.469104 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"310c3619-a1a7-4137-8e90-4646ec724cb8","Type":"ContainerDied","Data":"311af08ae17256cc378e2896cb6c9536bd9860885daab02179edd3c8141b5d00"} Jan 29 09:02:28 crc kubenswrapper[4895]: I0129 09:02:28.469183 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"310c3619-a1a7-4137-8e90-4646ec724cb8","Type":"ContainerDied","Data":"36e4a3c0a319e7349b63292e5299578ef2d7528004c704a0cbab5e1dd3e6ea70"} Jan 29 09:02:28 crc kubenswrapper[4895]: I0129 09:02:28.469196 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"310c3619-a1a7-4137-8e90-4646ec724cb8","Type":"ContainerDied","Data":"c95ffdcd861fc1f11ac608f9c9e6f5dc2543639d7660779fb9ee4cba6a010ddd"} Jan 29 09:02:28 crc kubenswrapper[4895]: I0129 09:02:28.612079 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:28 crc kubenswrapper[4895]: I0129 09:02:28.613074 4895 scope.go:117] "RemoveContainer" containerID="b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763" Jan 29 09:02:28 crc kubenswrapper[4895]: E0129 09:02:28.613392 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=ironic-neutron-agent pod=ironic-neutron-agent-78c59f886f-xtrfg_openstack(844ab9b8-4b72-401d-b008-db11605452a8)\"" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" podUID="844ab9b8-4b72-401d-b008-db11605452a8"
Jan 29 09:02:29 crc kubenswrapper[4895]: I0129 09:02:29.829346 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 29 09:02:32 crc kubenswrapper[4895]: I0129 09:02:32.392607 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 29 09:02:32 crc kubenswrapper[4895]: I0129 09:02:32.715063 4895 generic.go:334] "Generic (PLEG): container finished" podID="1318c5c6-26bf-46e6-aba5-ab4e024be588" containerID="88b21480dca21eab1a64888e5b3df62103995f907973c8161c3de0b2b23d0e38" exitCode=0
Jan 29 09:02:32 crc kubenswrapper[4895]: I0129 09:02:32.715617 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-v7z9b" event={"ID":"1318c5c6-26bf-46e6-aba5-ab4e024be588","Type":"ContainerDied","Data":"88b21480dca21eab1a64888e5b3df62103995f907973c8161c3de0b2b23d0e38"}
Jan 29 09:02:32 crc kubenswrapper[4895]: I0129 09:02:32.727106 4895 generic.go:334] "Generic (PLEG): container finished" podID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerID="a4cd4bd94745299566c79a20695635d3af9886e3e850bd2458640ad3deb9376c" exitCode=0
Jan 29 09:02:32 crc kubenswrapper[4895]: I0129 09:02:32.727167 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"310c3619-a1a7-4137-8e90-4646ec724cb8","Type":"ContainerDied","Data":"a4cd4bd94745299566c79a20695635d3af9886e3e850bd2458640ad3deb9376c"}
Jan 29 09:02:33 crc kubenswrapper[4895]: I0129 09:02:33.744996 4895 generic.go:334] "Generic (PLEG): container finished" podID="df4d1b85-39a1-46ad-8a21-60a165dbbf6d" containerID="396eda2b3209074288f3195193f5775a3b1ffa1e6e4ee4854b7f6fad7a771887" exitCode=0
Jan 29 09:02:33 crc kubenswrapper[4895]: I0129 09:02:33.745074 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c686984cb-9nzt7" event={"ID":"df4d1b85-39a1-46ad-8a21-60a165dbbf6d","Type":"ContainerDied","Data":"396eda2b3209074288f3195193f5775a3b1ffa1e6e4ee4854b7f6fad7a771887"}
Jan 29 09:02:35 crc kubenswrapper[4895]: I0129 09:02:35.100113 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5fb7b47b77-cq2p9"
Jan 29 09:02:35 crc kubenswrapper[4895]: I0129 09:02:35.101628 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5fb7b47b77-cq2p9"
Jan 29 09:02:36 crc kubenswrapper[4895]: I0129 09:02:36.210512 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.170:3000/\": dial tcp 10.217.0.170:3000: connect: connection refused"
Jan 29 09:02:43 crc kubenswrapper[4895]: I0129 09:02:43.211694 4895 scope.go:117] "RemoveContainer" containerID="b7658a488632724e9f1fe1e11b2364b0c8348b1756a84fd23795c2b049df7763"
Jan 29 09:02:44 crc kubenswrapper[4895]: E0129 09:02:44.846099 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified"
Jan 29 09:02:44 crc kubenswrapper[4895]: E0129 09:02:44.846832 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n85h58dh57ch677hdbh587h65bh55bh66ch58bh579h5h699h658h54ch55ch688h565h9h65h58bh54h79h8bh5d4hf8h689h7h54ch599h5d4h694q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fns4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(f10bf685-c7de-4126-afc5-6bd68c3e8845): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 09:02:44 crc kubenswrapper[4895]: E0129 09:02:44.847999 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="f10bf685-c7de-4126-afc5-6bd68c3e8845"
Jan 29 09:02:44 crc kubenswrapper[4895]: I0129 09:02:44.864781 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 09:02:44 crc kubenswrapper[4895]: I0129 09:02:44.865135 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="53f417a8-012d-4063-b1b8-e60f50fbf8ae" containerName="glance-log" containerID="cri-o://728474e3554cd05d244bf875d5bf9615eaa97a07cfe2f3dd31a5c25d884b9b93" gracePeriod=30
Jan 29 09:02:44 crc kubenswrapper[4895]: I0129 09:02:44.865779 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="53f417a8-012d-4063-b1b8-e60f50fbf8ae" containerName="glance-httpd" containerID="cri-o://504cc3f5f228f93b4ad6e247c43ab7a494129cc60fef4bd8d37e9a649d0e74ff" gracePeriod=30
Jan 29 09:02:44 crc kubenswrapper[4895]: I0129 09:02:44.940549 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-v7z9b" event={"ID":"1318c5c6-26bf-46e6-aba5-ab4e024be588","Type":"ContainerDied","Data":"638e0c5b9fd9965ce3dcc2200a5c9e17a12f1d24b0a6e75195024235796c9b04"}
Jan 29 09:02:44 crc kubenswrapper[4895]: I0129 09:02:44.940605 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="638e0c5b9fd9965ce3dcc2200a5c9e17a12f1d24b0a6e75195024235796c9b04"
Jan 29 09:02:44 crc kubenswrapper[4895]: I0129 09:02:44.947336 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-v7z9b"
Jan 29 09:02:44 crc kubenswrapper[4895]: I0129 09:02:44.996592 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrdf6\" (UniqueName: \"kubernetes.io/projected/1318c5c6-26bf-46e6-aba5-ab4e024be588-kube-api-access-nrdf6\") pod \"1318c5c6-26bf-46e6-aba5-ab4e024be588\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") "
Jan 29 09:02:44 crc kubenswrapper[4895]: I0129 09:02:44.999691 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-combined-ca-bundle\") pod \"1318c5c6-26bf-46e6-aba5-ab4e024be588\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") "
Jan 29 09:02:44 crc kubenswrapper[4895]: I0129 09:02:44.999800 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/1318c5c6-26bf-46e6-aba5-ab4e024be588-var-lib-ironic\") pod \"1318c5c6-26bf-46e6-aba5-ab4e024be588\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") "
Jan 29 09:02:44 crc kubenswrapper[4895]: I0129 09:02:44.999889 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-scripts\") pod \"1318c5c6-26bf-46e6-aba5-ab4e024be588\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") "
Jan 29 09:02:44 crc kubenswrapper[4895]: I0129 09:02:44.999937 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/1318c5c6-26bf-46e6-aba5-ab4e024be588-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"1318c5c6-26bf-46e6-aba5-ab4e024be588\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") "
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.000254 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1318c5c6-26bf-46e6-aba5-ab4e024be588-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "1318c5c6-26bf-46e6-aba5-ab4e024be588" (UID: "1318c5c6-26bf-46e6-aba5-ab4e024be588"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.000478 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1318c5c6-26bf-46e6-aba5-ab4e024be588-etc-podinfo\") pod \"1318c5c6-26bf-46e6-aba5-ab4e024be588\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") "
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.000519 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-config\") pod \"1318c5c6-26bf-46e6-aba5-ab4e024be588\" (UID: \"1318c5c6-26bf-46e6-aba5-ab4e024be588\") "
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.000660 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1318c5c6-26bf-46e6-aba5-ab4e024be588-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "1318c5c6-26bf-46e6-aba5-ab4e024be588" (UID: "1318c5c6-26bf-46e6-aba5-ab4e024be588"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.001644 4895 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/1318c5c6-26bf-46e6-aba5-ab4e024be588-var-lib-ironic\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.001669 4895 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/1318c5c6-26bf-46e6-aba5-ab4e024be588-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.013160 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-scripts" (OuterVolumeSpecName: "scripts") pod "1318c5c6-26bf-46e6-aba5-ab4e024be588" (UID: "1318c5c6-26bf-46e6-aba5-ab4e024be588"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.013160 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1318c5c6-26bf-46e6-aba5-ab4e024be588-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "1318c5c6-26bf-46e6-aba5-ab4e024be588" (UID: "1318c5c6-26bf-46e6-aba5-ab4e024be588"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.018526 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1318c5c6-26bf-46e6-aba5-ab4e024be588-kube-api-access-nrdf6" (OuterVolumeSpecName: "kube-api-access-nrdf6") pod "1318c5c6-26bf-46e6-aba5-ab4e024be588" (UID: "1318c5c6-26bf-46e6-aba5-ab4e024be588"). InnerVolumeSpecName "kube-api-access-nrdf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.062968 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1318c5c6-26bf-46e6-aba5-ab4e024be588" (UID: "1318c5c6-26bf-46e6-aba5-ab4e024be588"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.267279 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.267455 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.267517 4895 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1318c5c6-26bf-46e6-aba5-ab4e024be588-etc-podinfo\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.267576 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrdf6\" (UniqueName: \"kubernetes.io/projected/1318c5c6-26bf-46e6-aba5-ab4e024be588-kube-api-access-nrdf6\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:45 crc kubenswrapper[4895]: E0129 09:02:45.274260 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="f10bf685-c7de-4126-afc5-6bd68c3e8845"
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.283110 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-config" (OuterVolumeSpecName: "config") pod "1318c5c6-26bf-46e6-aba5-ab4e024be588" (UID: "1318c5c6-26bf-46e6-aba5-ab4e024be588"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.372443 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1318c5c6-26bf-46e6-aba5-ab4e024be588-config\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.983701 4895 generic.go:334] "Generic (PLEG): container finished" podID="53f417a8-012d-4063-b1b8-e60f50fbf8ae" containerID="728474e3554cd05d244bf875d5bf9615eaa97a07cfe2f3dd31a5c25d884b9b93" exitCode=143
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.983771 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53f417a8-012d-4063-b1b8-e60f50fbf8ae","Type":"ContainerDied","Data":"728474e3554cd05d244bf875d5bf9615eaa97a07cfe2f3dd31a5c25d884b9b93"}
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.994289 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" event={"ID":"844ab9b8-4b72-401d-b008-db11605452a8","Type":"ContainerStarted","Data":"42d2a4e879c151efa37c86c19a649180d54a1393c4fc2b0ea2113843fde93033"}
Jan 29 09:02:45 crc kubenswrapper[4895]: I0129 09:02:45.994362 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-v7z9b"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.151330 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.270361 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c686984cb-9nzt7"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.300013 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-scripts\") pod \"310c3619-a1a7-4137-8e90-4646ec724cb8\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") "
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.300165 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq2tj\" (UniqueName: \"kubernetes.io/projected/310c3619-a1a7-4137-8e90-4646ec724cb8-kube-api-access-gq2tj\") pod \"310c3619-a1a7-4137-8e90-4646ec724cb8\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") "
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.300218 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/310c3619-a1a7-4137-8e90-4646ec724cb8-run-httpd\") pod \"310c3619-a1a7-4137-8e90-4646ec724cb8\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") "
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.300260 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-config-data\") pod \"310c3619-a1a7-4137-8e90-4646ec724cb8\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") "
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.300480 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-combined-ca-bundle\") pod \"310c3619-a1a7-4137-8e90-4646ec724cb8\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") "
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.300539 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-sg-core-conf-yaml\") pod \"310c3619-a1a7-4137-8e90-4646ec724cb8\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") "
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.300653 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/310c3619-a1a7-4137-8e90-4646ec724cb8-log-httpd\") pod \"310c3619-a1a7-4137-8e90-4646ec724cb8\" (UID: \"310c3619-a1a7-4137-8e90-4646ec724cb8\") "
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.302426 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310c3619-a1a7-4137-8e90-4646ec724cb8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "310c3619-a1a7-4137-8e90-4646ec724cb8" (UID: "310c3619-a1a7-4137-8e90-4646ec724cb8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.302453 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310c3619-a1a7-4137-8e90-4646ec724cb8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "310c3619-a1a7-4137-8e90-4646ec724cb8" (UID: "310c3619-a1a7-4137-8e90-4646ec724cb8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.310155 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310c3619-a1a7-4137-8e90-4646ec724cb8-kube-api-access-gq2tj" (OuterVolumeSpecName: "kube-api-access-gq2tj") pod "310c3619-a1a7-4137-8e90-4646ec724cb8" (UID: "310c3619-a1a7-4137-8e90-4646ec724cb8"). InnerVolumeSpecName "kube-api-access-gq2tj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.339496 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-scripts" (OuterVolumeSpecName: "scripts") pod "310c3619-a1a7-4137-8e90-4646ec724cb8" (UID: "310c3619-a1a7-4137-8e90-4646ec724cb8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.407552 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-combined-ca-bundle\") pod \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") "
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.407746 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqtgz\" (UniqueName: \"kubernetes.io/projected/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-kube-api-access-gqtgz\") pod \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") "
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.407804 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-ovndb-tls-certs\") pod \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") "
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.407831 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-config\") pod \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") "
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.408031 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-httpd-config\") pod \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\" (UID: \"df4d1b85-39a1-46ad-8a21-60a165dbbf6d\") "
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.408807 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/310c3619-a1a7-4137-8e90-4646ec724cb8-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.408821 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/310c3619-a1a7-4137-8e90-4646ec724cb8-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.408831 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.408843 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq2tj\" (UniqueName: \"kubernetes.io/projected/310c3619-a1a7-4137-8e90-4646ec724cb8-kube-api-access-gq2tj\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.417744 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "df4d1b85-39a1-46ad-8a21-60a165dbbf6d" (UID: "df4d1b85-39a1-46ad-8a21-60a165dbbf6d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.435045 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "310c3619-a1a7-4137-8e90-4646ec724cb8" (UID: "310c3619-a1a7-4137-8e90-4646ec724cb8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.439209 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-kube-api-access-gqtgz" (OuterVolumeSpecName: "kube-api-access-gqtgz") pod "df4d1b85-39a1-46ad-8a21-60a165dbbf6d" (UID: "df4d1b85-39a1-46ad-8a21-60a165dbbf6d"). InnerVolumeSpecName "kube-api-access-gqtgz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.500870 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"]
Jan 29 09:02:46 crc kubenswrapper[4895]: E0129 09:02:46.502278 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df4d1b85-39a1-46ad-8a21-60a165dbbf6d" containerName="neutron-httpd"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.502302 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="df4d1b85-39a1-46ad-8a21-60a165dbbf6d" containerName="neutron-httpd"
Jan 29 09:02:46 crc kubenswrapper[4895]: E0129 09:02:46.502317 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="ceilometer-notification-agent"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.502325 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="ceilometer-notification-agent"
Jan 29 09:02:46 crc kubenswrapper[4895]: E0129 09:02:46.502339 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df4d1b85-39a1-46ad-8a21-60a165dbbf6d" containerName="neutron-api"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.502345 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="df4d1b85-39a1-46ad-8a21-60a165dbbf6d" containerName="neutron-api"
Jan 29 09:02:46 crc kubenswrapper[4895]: E0129 09:02:46.502356 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="proxy-httpd"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.502362 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="proxy-httpd"
Jan 29 09:02:46 crc kubenswrapper[4895]: E0129 09:02:46.502377 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="ceilometer-central-agent"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.502383 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="ceilometer-central-agent"
Jan 29 09:02:46 crc kubenswrapper[4895]: E0129 09:02:46.502395 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="sg-core"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.502401 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="sg-core"
Jan 29 09:02:46 crc kubenswrapper[4895]: E0129 09:02:46.502429 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1318c5c6-26bf-46e6-aba5-ab4e024be588" containerName="ironic-inspector-db-sync"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.502435 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1318c5c6-26bf-46e6-aba5-ab4e024be588" containerName="ironic-inspector-db-sync"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.502656 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="proxy-httpd"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.502670 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="df4d1b85-39a1-46ad-8a21-60a165dbbf6d" containerName="neutron-httpd"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.502680 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1318c5c6-26bf-46e6-aba5-ab4e024be588" containerName="ironic-inspector-db-sync"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.502701 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="ceilometer-central-agent"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.502711 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="sg-core"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.502721 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="df4d1b85-39a1-46ad-8a21-60a165dbbf6d" containerName="neutron-api"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.502735 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" containerName="ceilometer-notification-agent"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.534086 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.539298 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqtgz\" (UniqueName: \"kubernetes.io/projected/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-kube-api-access-gqtgz\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.539452 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.552130 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.561324 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.562242 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.602735 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.606783 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "310c3619-a1a7-4137-8e90-4646ec724cb8" (UID: "310c3619-a1a7-4137-8e90-4646ec724cb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.608747 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df4d1b85-39a1-46ad-8a21-60a165dbbf6d" (UID: "df4d1b85-39a1-46ad-8a21-60a165dbbf6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.616473 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-config-data" (OuterVolumeSpecName: "config-data") pod "310c3619-a1a7-4137-8e90-4646ec724cb8" (UID: "310c3619-a1a7-4137-8e90-4646ec724cb8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.635606 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-config" (OuterVolumeSpecName: "config") pod "df4d1b85-39a1-46ad-8a21-60a165dbbf6d" (UID: "df4d1b85-39a1-46ad-8a21-60a165dbbf6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.644555 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.644625 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.644640 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-config\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.644671 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310c3619-a1a7-4137-8e90-4646ec724cb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.685762 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "df4d1b85-39a1-46ad-8a21-60a165dbbf6d" (UID: "df4d1b85-39a1-46ad-8a21-60a165dbbf6d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.747329 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-config\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.747398 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/42c2646f-8f1b-4357-8f80-b339103c8d5d-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.747502 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-scripts\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.747524 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.747558 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvs5f\" (UniqueName: \"kubernetes.io/projected/42c2646f-8f1b-4357-8f80-b339103c8d5d-kube-api-access-qvs5f\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.748571 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/42c2646f-8f1b-4357-8f80-b339103c8d5d-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.748822 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/42c2646f-8f1b-4357-8f80-b339103c8d5d-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.749124 4895 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4d1b85-39a1-46ad-8a21-60a165dbbf6d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.956887 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/42c2646f-8f1b-4357-8f80-b339103c8d5d-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.957078 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-scripts\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.957109 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.957148 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvs5f\" (UniqueName: \"kubernetes.io/projected/42c2646f-8f1b-4357-8f80-b339103c8d5d-kube-api-access-qvs5f\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.957181 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/42c2646f-8f1b-4357-8f80-b339103c8d5d-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.957327 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/42c2646f-8f1b-4357-8f80-b339103c8d5d-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0"
Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.957452 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-config\") pod \"ironic-inspector-0\" (UID: 
\"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0" Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.958378 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/42c2646f-8f1b-4357-8f80-b339103c8d5d-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0" Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.959232 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/42c2646f-8f1b-4357-8f80-b339103c8d5d-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0" Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.963232 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0" Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.967416 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/42c2646f-8f1b-4357-8f80-b339103c8d5d-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0" Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.968734 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-config\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0" Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.981055 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-scripts\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0" Jan 29 09:02:46 crc kubenswrapper[4895]: I0129 09:02:46.988622 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvs5f\" (UniqueName: \"kubernetes.io/projected/42c2646f-8f1b-4357-8f80-b339103c8d5d-kube-api-access-qvs5f\") pod \"ironic-inspector-0\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " pod="openstack/ironic-inspector-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.017761 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f893b3e3-3833-4a94-ab55-951f600fdadd","Type":"ContainerStarted","Data":"8a2ecd7e5c32674941488ecb525ee418a131401934f9bf38507231590b7030d3"} Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.024695 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c686984cb-9nzt7" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.025760 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c686984cb-9nzt7" event={"ID":"df4d1b85-39a1-46ad-8a21-60a165dbbf6d","Type":"ContainerDied","Data":"27d7332e96d9fef1bb1670210e6e7c6c47e9d68c1cf319dced45eb71c8833bdc"} Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.025852 4895 scope.go:117] "RemoveContainer" containerID="37d6efe9ff64dc67d137d7446d909566f2a8b08d88bbbc5bfa19e50d85ce14ed" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.060815 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"310c3619-a1a7-4137-8e90-4646ec724cb8","Type":"ContainerDied","Data":"48ac0713e02890edee64f015ebeb24ef9446bef28cd41fc0fe3ea5e6c7967826"} Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.060957 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.115205 4895 scope.go:117] "RemoveContainer" containerID="396eda2b3209074288f3195193f5775a3b1ffa1e6e4ee4854b7f6fad7a771887" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.150045 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c686984cb-9nzt7"] Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.160188 4895 scope.go:117] "RemoveContainer" containerID="311af08ae17256cc378e2896cb6c9536bd9860885daab02179edd3c8141b5d00" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.170283 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6c686984cb-9nzt7"] Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.188715 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.191764 4895 scope.go:117] "RemoveContainer" 
containerID="36e4a3c0a319e7349b63292e5299578ef2d7528004c704a0cbab5e1dd3e6ea70" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.201371 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.212560 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.225048 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="310c3619-a1a7-4137-8e90-4646ec724cb8" path="/var/lib/kubelet/pods/310c3619-a1a7-4137-8e90-4646ec724cb8/volumes" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.226033 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df4d1b85-39a1-46ad-8a21-60a165dbbf6d" path="/var/lib/kubelet/pods/df4d1b85-39a1-46ad-8a21-60a165dbbf6d/volumes" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.226862 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.231094 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.232969 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.238514 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.240047 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.280314 4895 scope.go:117] "RemoveContainer" containerID="a4cd4bd94745299566c79a20695635d3af9886e3e850bd2458640ad3deb9376c" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.325131 4895 scope.go:117] "RemoveContainer" containerID="c95ffdcd861fc1f11ac608f9c9e6f5dc2543639d7660779fb9ee4cba6a010ddd" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.376276 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3048dd8-2192-435c-a25f-8823906061ac-run-httpd\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.376360 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.376573 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-config-data\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " 
pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.376601 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-scripts\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.376634 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmcbn\" (UniqueName: \"kubernetes.io/projected/a3048dd8-2192-435c-a25f-8823906061ac-kube-api-access-mmcbn\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.376671 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.376711 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3048dd8-2192-435c-a25f-8823906061ac-log-httpd\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.480263 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-config-data\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.480321 4895 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-scripts\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.480360 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmcbn\" (UniqueName: \"kubernetes.io/projected/a3048dd8-2192-435c-a25f-8823906061ac-kube-api-access-mmcbn\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.480405 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.480451 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3048dd8-2192-435c-a25f-8823906061ac-log-httpd\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.480495 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3048dd8-2192-435c-a25f-8823906061ac-run-httpd\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.480527 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " 
pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.482005 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3048dd8-2192-435c-a25f-8823906061ac-run-httpd\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.483145 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3048dd8-2192-435c-a25f-8823906061ac-log-httpd\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.488819 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.489360 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-scripts\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.492851 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.509206 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-config-data\") pod 
\"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.512495 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmcbn\" (UniqueName: \"kubernetes.io/projected/a3048dd8-2192-435c-a25f-8823906061ac-kube-api-access-mmcbn\") pod \"ceilometer-0\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.580641 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.883822 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Jan 29 09:02:47 crc kubenswrapper[4895]: I0129 09:02:47.970167 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:02:48 crc kubenswrapper[4895]: I0129 09:02:48.083542 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3048dd8-2192-435c-a25f-8823906061ac","Type":"ContainerStarted","Data":"674bcf1a04be26187c4860edb25d417c1d5e158abe430bb6e9db5fbb792f2034"} Jan 29 09:02:48 crc kubenswrapper[4895]: I0129 09:02:48.087398 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"42c2646f-8f1b-4357-8f80-b339103c8d5d","Type":"ContainerStarted","Data":"1b1aa54ad4574507f0d8069bcb3bdb63304c1ef61e0ca0822e6be0b1a329fdc0"} Jan 29 09:02:48 crc kubenswrapper[4895]: I0129 09:02:48.610841 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.064506 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-78c59f886f-xtrfg" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.149395 4895 generic.go:334] 
"Generic (PLEG): container finished" podID="53f417a8-012d-4063-b1b8-e60f50fbf8ae" containerID="504cc3f5f228f93b4ad6e247c43ab7a494129cc60fef4bd8d37e9a649d0e74ff" exitCode=0 Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.149523 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53f417a8-012d-4063-b1b8-e60f50fbf8ae","Type":"ContainerDied","Data":"504cc3f5f228f93b4ad6e247c43ab7a494129cc60fef4bd8d37e9a649d0e74ff"} Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.163777 4895 generic.go:334] "Generic (PLEG): container finished" podID="42c2646f-8f1b-4357-8f80-b339103c8d5d" containerID="645317a1718bad3ffc7de1be3da22b9de6d77dbf83d4da91dff5a115ddead8cd" exitCode=0 Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.164039 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"42c2646f-8f1b-4357-8f80-b339103c8d5d","Type":"ContainerDied","Data":"645317a1718bad3ffc7de1be3da22b9de6d77dbf83d4da91dff5a115ddead8cd"} Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.439204 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.449648 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-scripts\") pod \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.449725 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4whcw\" (UniqueName: \"kubernetes.io/projected/53f417a8-012d-4063-b1b8-e60f50fbf8ae-kube-api-access-4whcw\") pod \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.449796 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53f417a8-012d-4063-b1b8-e60f50fbf8ae-httpd-run\") pod \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.449821 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-combined-ca-bundle\") pod \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.449884 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53f417a8-012d-4063-b1b8-e60f50fbf8ae-logs\") pod \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.449964 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-config-data\") pod \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.449995 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.450028 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-public-tls-certs\") pod \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\" (UID: \"53f417a8-012d-4063-b1b8-e60f50fbf8ae\") " Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.455284 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53f417a8-012d-4063-b1b8-e60f50fbf8ae-logs" (OuterVolumeSpecName: "logs") pod "53f417a8-012d-4063-b1b8-e60f50fbf8ae" (UID: "53f417a8-012d-4063-b1b8-e60f50fbf8ae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.455476 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53f417a8-012d-4063-b1b8-e60f50fbf8ae-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "53f417a8-012d-4063-b1b8-e60f50fbf8ae" (UID: "53f417a8-012d-4063-b1b8-e60f50fbf8ae"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.508340 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53f417a8-012d-4063-b1b8-e60f50fbf8ae-kube-api-access-4whcw" (OuterVolumeSpecName: "kube-api-access-4whcw") pod "53f417a8-012d-4063-b1b8-e60f50fbf8ae" (UID: "53f417a8-012d-4063-b1b8-e60f50fbf8ae"). InnerVolumeSpecName "kube-api-access-4whcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.513091 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "53f417a8-012d-4063-b1b8-e60f50fbf8ae" (UID: "53f417a8-012d-4063-b1b8-e60f50fbf8ae"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.513204 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-scripts" (OuterVolumeSpecName: "scripts") pod "53f417a8-012d-4063-b1b8-e60f50fbf8ae" (UID: "53f417a8-012d-4063-b1b8-e60f50fbf8ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.541501 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53f417a8-012d-4063-b1b8-e60f50fbf8ae" (UID: "53f417a8-012d-4063-b1b8-e60f50fbf8ae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.553802 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.553842 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4whcw\" (UniqueName: \"kubernetes.io/projected/53f417a8-012d-4063-b1b8-e60f50fbf8ae-kube-api-access-4whcw\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.553854 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53f417a8-012d-4063-b1b8-e60f50fbf8ae-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.553864 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.553872 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53f417a8-012d-4063-b1b8-e60f50fbf8ae-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.553905 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.607289 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.624772 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "53f417a8-012d-4063-b1b8-e60f50fbf8ae" (UID: "53f417a8-012d-4063-b1b8-e60f50fbf8ae"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.628103 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-config-data" (OuterVolumeSpecName: "config-data") pod "53f417a8-012d-4063-b1b8-e60f50fbf8ae" (UID: "53f417a8-012d-4063-b1b8-e60f50fbf8ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.655778 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.655832 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:49 crc kubenswrapper[4895]: I0129 09:02:49.655851 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53f417a8-012d-4063-b1b8-e60f50fbf8ae-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.180134 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3048dd8-2192-435c-a25f-8823906061ac","Type":"ContainerStarted","Data":"8b803d19f0b22a5d8bfed61897b8d03efc523962f712390e93f6205606aa9697"} Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.184406 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"53f417a8-012d-4063-b1b8-e60f50fbf8ae","Type":"ContainerDied","Data":"837a29b0bbe27816c8233f6ef08564f7e504846e39ca5c6158fac4b34ad2ae2b"} Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.184489 4895 scope.go:117] "RemoveContainer" containerID="504cc3f5f228f93b4ad6e247c43ab7a494129cc60fef4bd8d37e9a649d0e74ff" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.184712 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.244047 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.260429 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.280877 4895 scope.go:117] "RemoveContainer" containerID="728474e3554cd05d244bf875d5bf9615eaa97a07cfe2f3dd31a5c25d884b9b93" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.293384 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:02:50 crc kubenswrapper[4895]: E0129 09:02:50.298817 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f417a8-012d-4063-b1b8-e60f50fbf8ae" containerName="glance-log" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.298886 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f417a8-012d-4063-b1b8-e60f50fbf8ae" containerName="glance-log" Jan 29 09:02:50 crc kubenswrapper[4895]: E0129 09:02:50.298899 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f417a8-012d-4063-b1b8-e60f50fbf8ae" containerName="glance-httpd" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.298908 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f417a8-012d-4063-b1b8-e60f50fbf8ae" containerName="glance-httpd" Jan 29 09:02:50 crc 
kubenswrapper[4895]: I0129 09:02:50.299291 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f417a8-012d-4063-b1b8-e60f50fbf8ae" containerName="glance-httpd" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.299323 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f417a8-012d-4063-b1b8-e60f50fbf8ae" containerName="glance-log" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.300582 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.306872 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.307153 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.329111 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.475718 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd318cba-9380-4676-bb83-3256c9c5adf5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.475822 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd318cba-9380-4676-bb83-3256c9c5adf5-config-data\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.475963 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.475990 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd318cba-9380-4676-bb83-3256c9c5adf5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.476039 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd318cba-9380-4676-bb83-3256c9c5adf5-logs\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.476100 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd318cba-9380-4676-bb83-3256c9c5adf5-scripts\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.476150 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd318cba-9380-4676-bb83-3256c9c5adf5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.476181 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss6tx\" (UniqueName: \"kubernetes.io/projected/cd318cba-9380-4676-bb83-3256c9c5adf5-kube-api-access-ss6tx\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.547906 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.579530 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd318cba-9380-4676-bb83-3256c9c5adf5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.579631 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd318cba-9380-4676-bb83-3256c9c5adf5-config-data\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.579687 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.579712 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd318cba-9380-4676-bb83-3256c9c5adf5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " 
pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.579765 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd318cba-9380-4676-bb83-3256c9c5adf5-logs\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.579827 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd318cba-9380-4676-bb83-3256c9c5adf5-scripts\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.579877 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd318cba-9380-4676-bb83-3256c9c5adf5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.579943 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss6tx\" (UniqueName: \"kubernetes.io/projected/cd318cba-9380-4676-bb83-3256c9c5adf5-kube-api-access-ss6tx\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.580425 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cd318cba-9380-4676-bb83-3256c9c5adf5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc 
kubenswrapper[4895]: I0129 09:02:50.580987 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd318cba-9380-4676-bb83-3256c9c5adf5-logs\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.582734 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.593467 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd318cba-9380-4676-bb83-3256c9c5adf5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.594200 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd318cba-9380-4676-bb83-3256c9c5adf5-config-data\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.604091 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd318cba-9380-4676-bb83-3256c9c5adf5-scripts\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.604942 4895 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd318cba-9380-4676-bb83-3256c9c5adf5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.622041 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss6tx\" (UniqueName: \"kubernetes.io/projected/cd318cba-9380-4676-bb83-3256c9c5adf5-kube-api-access-ss6tx\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.661288 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"cd318cba-9380-4676-bb83-3256c9c5adf5\") " pod="openstack/glance-default-external-api-0" Jan 29 09:02:50 crc kubenswrapper[4895]: I0129 09:02:50.939134 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 09:02:51 crc kubenswrapper[4895]: I0129 09:02:51.248055 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53f417a8-012d-4063-b1b8-e60f50fbf8ae" path="/var/lib/kubelet/pods/53f417a8-012d-4063-b1b8-e60f50fbf8ae/volumes" Jan 29 09:02:51 crc kubenswrapper[4895]: I0129 09:02:51.249613 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3048dd8-2192-435c-a25f-8823906061ac","Type":"ContainerStarted","Data":"198155e5d6b3ab05adf4ec1b2deda980a6bc062a8baa4401f53a16fdffbfa3cc"} Jan 29 09:02:51 crc kubenswrapper[4895]: I0129 09:02:51.619546 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 09:02:51 crc kubenswrapper[4895]: W0129 09:02:51.635141 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd318cba_9380_4676_bb83_3256c9c5adf5.slice/crio-1098896f61f26775a846d25530c1f43d2b5eb74740875ce8dae0697676e3bc5d WatchSource:0}: Error finding container 1098896f61f26775a846d25530c1f43d2b5eb74740875ce8dae0697676e3bc5d: Status 404 returned error can't find the container with id 1098896f61f26775a846d25530c1f43d2b5eb74740875ce8dae0697676e3bc5d Jan 29 09:02:52 crc kubenswrapper[4895]: I0129 09:02:52.306849 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3048dd8-2192-435c-a25f-8823906061ac","Type":"ContainerStarted","Data":"241f0b80a30c71297e463e6c07f71a2b4b1e52ed8a4ad6ffb7ebc800dfdb4745"} Jan 29 09:02:52 crc kubenswrapper[4895]: I0129 09:02:52.308501 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd318cba-9380-4676-bb83-3256c9c5adf5","Type":"ContainerStarted","Data":"1098896f61f26775a846d25530c1f43d2b5eb74740875ce8dae0697676e3bc5d"} Jan 29 09:02:53 crc kubenswrapper[4895]: I0129 
09:02:53.325112 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd318cba-9380-4676-bb83-3256c9c5adf5","Type":"ContainerStarted","Data":"be44b82c83291ebf57fdd8867b2a5e7c3ea8e6ca019dbf1a0b98d10ab52f4a19"} Jan 29 09:02:54 crc kubenswrapper[4895]: I0129 09:02:54.342373 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cd318cba-9380-4676-bb83-3256c9c5adf5","Type":"ContainerStarted","Data":"543c2e59e7d9842f514498c671c8d9389bf12f107997fc0b817b636d7281d2ec"} Jan 29 09:02:54 crc kubenswrapper[4895]: I0129 09:02:54.367876 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.367847101 podStartE2EDuration="4.367847101s" podCreationTimestamp="2026-01-29 09:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:02:54.361557001 +0000 UTC m=+1316.003065157" watchObservedRunningTime="2026-01-29 09:02:54.367847101 +0000 UTC m=+1316.009355237" Jan 29 09:02:55 crc kubenswrapper[4895]: I0129 09:02:55.491101 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:02:55 crc kubenswrapper[4895]: I0129 09:02:55.491460 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b56c025d-f59c-402b-8ad8-072e78d3b776" containerName="glance-log" containerID="cri-o://6340fa4e8500894fe5fdd4e8727a9c964dde4935e0e16ed68276fec030e46b14" gracePeriod=30 Jan 29 09:02:55 crc kubenswrapper[4895]: I0129 09:02:55.491605 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b56c025d-f59c-402b-8ad8-072e78d3b776" containerName="glance-httpd" 
containerID="cri-o://8e00c9b1edea6840545c4e3f417d54a872460f1668d288f2b62b3044b0eb35c5" gracePeriod=30 Jan 29 09:02:55 crc kubenswrapper[4895]: I0129 09:02:55.997074 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-fnzdv"] Jan 29 09:02:55 crc kubenswrapper[4895]: I0129 09:02:55.999633 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fnzdv" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.017226 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fnzdv"] Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.097001 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-msmz2"] Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.098865 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-msmz2" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.131994 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-msmz2"] Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.134540 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txgbx\" (UniqueName: \"kubernetes.io/projected/fc395a7a-25bb-46cd-89b0-9bbd5b1431f7-kube-api-access-txgbx\") pod \"nova-api-db-create-fnzdv\" (UID: \"fc395a7a-25bb-46cd-89b0-9bbd5b1431f7\") " pod="openstack/nova-api-db-create-fnzdv" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.134639 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc395a7a-25bb-46cd-89b0-9bbd5b1431f7-operator-scripts\") pod \"nova-api-db-create-fnzdv\" (UID: \"fc395a7a-25bb-46cd-89b0-9bbd5b1431f7\") " pod="openstack/nova-api-db-create-fnzdv" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.259252 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxj98\" (UniqueName: \"kubernetes.io/projected/5ad7a906-bb60-46a4-9cd1-edcdbc3db91d-kube-api-access-mxj98\") pod \"nova-cell0-db-create-msmz2\" (UID: \"5ad7a906-bb60-46a4-9cd1-edcdbc3db91d\") " pod="openstack/nova-cell0-db-create-msmz2" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.259419 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txgbx\" (UniqueName: \"kubernetes.io/projected/fc395a7a-25bb-46cd-89b0-9bbd5b1431f7-kube-api-access-txgbx\") pod \"nova-api-db-create-fnzdv\" (UID: \"fc395a7a-25bb-46cd-89b0-9bbd5b1431f7\") " pod="openstack/nova-api-db-create-fnzdv" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.259509 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc395a7a-25bb-46cd-89b0-9bbd5b1431f7-operator-scripts\") pod \"nova-api-db-create-fnzdv\" (UID: \"fc395a7a-25bb-46cd-89b0-9bbd5b1431f7\") " pod="openstack/nova-api-db-create-fnzdv" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.259551 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad7a906-bb60-46a4-9cd1-edcdbc3db91d-operator-scripts\") pod \"nova-cell0-db-create-msmz2\" (UID: \"5ad7a906-bb60-46a4-9cd1-edcdbc3db91d\") " pod="openstack/nova-cell0-db-create-msmz2" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.262619 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc395a7a-25bb-46cd-89b0-9bbd5b1431f7-operator-scripts\") pod \"nova-api-db-create-fnzdv\" (UID: \"fc395a7a-25bb-46cd-89b0-9bbd5b1431f7\") " pod="openstack/nova-api-db-create-fnzdv" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.343338 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txgbx\" (UniqueName: \"kubernetes.io/projected/fc395a7a-25bb-46cd-89b0-9bbd5b1431f7-kube-api-access-txgbx\") pod \"nova-api-db-create-fnzdv\" (UID: \"fc395a7a-25bb-46cd-89b0-9bbd5b1431f7\") " pod="openstack/nova-api-db-create-fnzdv" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.344146 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fnzdv" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.353968 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-dpcdv"] Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.362503 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxj98\" (UniqueName: \"kubernetes.io/projected/5ad7a906-bb60-46a4-9cd1-edcdbc3db91d-kube-api-access-mxj98\") pod \"nova-cell0-db-create-msmz2\" (UID: \"5ad7a906-bb60-46a4-9cd1-edcdbc3db91d\") " pod="openstack/nova-cell0-db-create-msmz2" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.362658 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad7a906-bb60-46a4-9cd1-edcdbc3db91d-operator-scripts\") pod \"nova-cell0-db-create-msmz2\" (UID: \"5ad7a906-bb60-46a4-9cd1-edcdbc3db91d\") " pod="openstack/nova-cell0-db-create-msmz2" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.363701 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad7a906-bb60-46a4-9cd1-edcdbc3db91d-operator-scripts\") pod \"nova-cell0-db-create-msmz2\" (UID: \"5ad7a906-bb60-46a4-9cd1-edcdbc3db91d\") " pod="openstack/nova-cell0-db-create-msmz2" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.371032 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dpcdv"] Jan 29 
09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.371264 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dpcdv" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.447796 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxj98\" (UniqueName: \"kubernetes.io/projected/5ad7a906-bb60-46a4-9cd1-edcdbc3db91d-kube-api-access-mxj98\") pod \"nova-cell0-db-create-msmz2\" (UID: \"5ad7a906-bb60-46a4-9cd1-edcdbc3db91d\") " pod="openstack/nova-cell0-db-create-msmz2" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.467717 4895 generic.go:334] "Generic (PLEG): container finished" podID="f893b3e3-3833-4a94-ab55-951f600fdadd" containerID="8a2ecd7e5c32674941488ecb525ee418a131401934f9bf38507231590b7030d3" exitCode=0 Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.467889 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f893b3e3-3833-4a94-ab55-951f600fdadd","Type":"ContainerDied","Data":"8a2ecd7e5c32674941488ecb525ee418a131401934f9bf38507231590b7030d3"} Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.516939 4895 generic.go:334] "Generic (PLEG): container finished" podID="b56c025d-f59c-402b-8ad8-072e78d3b776" containerID="6340fa4e8500894fe5fdd4e8727a9c964dde4935e0e16ed68276fec030e46b14" exitCode=143 Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.518625 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b56c025d-f59c-402b-8ad8-072e78d3b776","Type":"ContainerDied","Data":"6340fa4e8500894fe5fdd4e8727a9c964dde4935e0e16ed68276fec030e46b14"} Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.519276 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-msmz2" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.566792 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wf57\" (UniqueName: \"kubernetes.io/projected/81129832-d241-4127-b30b-9a54a350d12f-kube-api-access-7wf57\") pod \"nova-cell1-db-create-dpcdv\" (UID: \"81129832-d241-4127-b30b-9a54a350d12f\") " pod="openstack/nova-cell1-db-create-dpcdv" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.566884 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81129832-d241-4127-b30b-9a54a350d12f-operator-scripts\") pod \"nova-cell1-db-create-dpcdv\" (UID: \"81129832-d241-4127-b30b-9a54a350d12f\") " pod="openstack/nova-cell1-db-create-dpcdv" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.669449 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wf57\" (UniqueName: \"kubernetes.io/projected/81129832-d241-4127-b30b-9a54a350d12f-kube-api-access-7wf57\") pod \"nova-cell1-db-create-dpcdv\" (UID: \"81129832-d241-4127-b30b-9a54a350d12f\") " pod="openstack/nova-cell1-db-create-dpcdv" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.669551 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81129832-d241-4127-b30b-9a54a350d12f-operator-scripts\") pod \"nova-cell1-db-create-dpcdv\" (UID: \"81129832-d241-4127-b30b-9a54a350d12f\") " pod="openstack/nova-cell1-db-create-dpcdv" Jan 29 09:02:56 crc kubenswrapper[4895]: I0129 09:02:56.670871 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81129832-d241-4127-b30b-9a54a350d12f-operator-scripts\") pod \"nova-cell1-db-create-dpcdv\" (UID: 
\"81129832-d241-4127-b30b-9a54a350d12f\") " pod="openstack/nova-cell1-db-create-dpcdv" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.016570 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wf57\" (UniqueName: \"kubernetes.io/projected/81129832-d241-4127-b30b-9a54a350d12f-kube-api-access-7wf57\") pod \"nova-cell1-db-create-dpcdv\" (UID: \"81129832-d241-4127-b30b-9a54a350d12f\") " pod="openstack/nova-cell1-db-create-dpcdv" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.067357 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-432f-account-create-update-rqswk"] Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.070283 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-432f-account-create-update-rqswk" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.070677 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dpcdv" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.076089 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-432f-account-create-update-rqswk"] Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.077054 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.178777 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.213610 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4260798e-471a-4a37-8a59-e4c5842d7ea5-operator-scripts\") pod \"nova-api-432f-account-create-update-rqswk\" (UID: \"4260798e-471a-4a37-8a59-e4c5842d7ea5\") " pod="openstack/nova-api-432f-account-create-update-rqswk" Jan 29 09:02:57 crc kubenswrapper[4895]: 
I0129 09:02:57.213681 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpzh4\" (UniqueName: \"kubernetes.io/projected/4260798e-471a-4a37-8a59-e4c5842d7ea5-kube-api-access-tpzh4\") pod \"nova-api-432f-account-create-update-rqswk\" (UID: \"4260798e-471a-4a37-8a59-e4c5842d7ea5\") " pod="openstack/nova-api-432f-account-create-update-rqswk" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.316699 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4260798e-471a-4a37-8a59-e4c5842d7ea5-operator-scripts\") pod \"nova-api-432f-account-create-update-rqswk\" (UID: \"4260798e-471a-4a37-8a59-e4c5842d7ea5\") " pod="openstack/nova-api-432f-account-create-update-rqswk" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.316773 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpzh4\" (UniqueName: \"kubernetes.io/projected/4260798e-471a-4a37-8a59-e4c5842d7ea5-kube-api-access-tpzh4\") pod \"nova-api-432f-account-create-update-rqswk\" (UID: \"4260798e-471a-4a37-8a59-e4c5842d7ea5\") " pod="openstack/nova-api-432f-account-create-update-rqswk" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.317811 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4260798e-471a-4a37-8a59-e4c5842d7ea5-operator-scripts\") pod \"nova-api-432f-account-create-update-rqswk\" (UID: \"4260798e-471a-4a37-8a59-e4c5842d7ea5\") " pod="openstack/nova-api-432f-account-create-update-rqswk" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.351945 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpzh4\" (UniqueName: \"kubernetes.io/projected/4260798e-471a-4a37-8a59-e4c5842d7ea5-kube-api-access-tpzh4\") pod \"nova-api-432f-account-create-update-rqswk\" (UID: 
\"4260798e-471a-4a37-8a59-e4c5842d7ea5\") " pod="openstack/nova-api-432f-account-create-update-rqswk" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.413468 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-432f-account-create-update-rqswk" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.570889 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b5c0-account-create-update-lmlgf"] Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.572934 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b5c0-account-create-update-lmlgf" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.576853 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.585720 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b5c0-account-create-update-lmlgf"] Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.724790 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkvtw\" (UniqueName: \"kubernetes.io/projected/e22259fa-a96d-4509-9499-a569fe60a39c-kube-api-access-tkvtw\") pod \"nova-cell0-b5c0-account-create-update-lmlgf\" (UID: \"e22259fa-a96d-4509-9499-a569fe60a39c\") " pod="openstack/nova-cell0-b5c0-account-create-update-lmlgf" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.724870 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e22259fa-a96d-4509-9499-a569fe60a39c-operator-scripts\") pod \"nova-cell0-b5c0-account-create-update-lmlgf\" (UID: \"e22259fa-a96d-4509-9499-a569fe60a39c\") " pod="openstack/nova-cell0-b5c0-account-create-update-lmlgf" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.773040 4895 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-cell1-105a-account-create-update-s9xqr"] Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.774522 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-105a-account-create-update-s9xqr" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.784023 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.790668 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-105a-account-create-update-s9xqr"] Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.831012 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkvtw\" (UniqueName: \"kubernetes.io/projected/e22259fa-a96d-4509-9499-a569fe60a39c-kube-api-access-tkvtw\") pod \"nova-cell0-b5c0-account-create-update-lmlgf\" (UID: \"e22259fa-a96d-4509-9499-a569fe60a39c\") " pod="openstack/nova-cell0-b5c0-account-create-update-lmlgf" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.833133 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e22259fa-a96d-4509-9499-a569fe60a39c-operator-scripts\") pod \"nova-cell0-b5c0-account-create-update-lmlgf\" (UID: \"e22259fa-a96d-4509-9499-a569fe60a39c\") " pod="openstack/nova-cell0-b5c0-account-create-update-lmlgf" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.834086 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e22259fa-a96d-4509-9499-a569fe60a39c-operator-scripts\") pod \"nova-cell0-b5c0-account-create-update-lmlgf\" (UID: \"e22259fa-a96d-4509-9499-a569fe60a39c\") " pod="openstack/nova-cell0-b5c0-account-create-update-lmlgf" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.854162 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkvtw\" (UniqueName: \"kubernetes.io/projected/e22259fa-a96d-4509-9499-a569fe60a39c-kube-api-access-tkvtw\") pod \"nova-cell0-b5c0-account-create-update-lmlgf\" (UID: \"e22259fa-a96d-4509-9499-a569fe60a39c\") " pod="openstack/nova-cell0-b5c0-account-create-update-lmlgf" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.896178 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b5c0-account-create-update-lmlgf" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.935881 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xblzg\" (UniqueName: \"kubernetes.io/projected/9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439-kube-api-access-xblzg\") pod \"nova-cell1-105a-account-create-update-s9xqr\" (UID: \"9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439\") " pod="openstack/nova-cell1-105a-account-create-update-s9xqr" Jan 29 09:02:57 crc kubenswrapper[4895]: I0129 09:02:57.936043 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439-operator-scripts\") pod \"nova-cell1-105a-account-create-update-s9xqr\" (UID: \"9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439\") " pod="openstack/nova-cell1-105a-account-create-update-s9xqr" Jan 29 09:02:58 crc kubenswrapper[4895]: I0129 09:02:58.039277 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xblzg\" (UniqueName: \"kubernetes.io/projected/9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439-kube-api-access-xblzg\") pod \"nova-cell1-105a-account-create-update-s9xqr\" (UID: \"9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439\") " pod="openstack/nova-cell1-105a-account-create-update-s9xqr" Jan 29 09:02:58 crc kubenswrapper[4895]: I0129 09:02:58.039494 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439-operator-scripts\") pod \"nova-cell1-105a-account-create-update-s9xqr\" (UID: \"9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439\") " pod="openstack/nova-cell1-105a-account-create-update-s9xqr" Jan 29 09:02:58 crc kubenswrapper[4895]: I0129 09:02:58.040446 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439-operator-scripts\") pod \"nova-cell1-105a-account-create-update-s9xqr\" (UID: \"9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439\") " pod="openstack/nova-cell1-105a-account-create-update-s9xqr" Jan 29 09:02:58 crc kubenswrapper[4895]: I0129 09:02:58.065815 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xblzg\" (UniqueName: \"kubernetes.io/projected/9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439-kube-api-access-xblzg\") pod \"nova-cell1-105a-account-create-update-s9xqr\" (UID: \"9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439\") " pod="openstack/nova-cell1-105a-account-create-update-s9xqr" Jan 29 09:02:58 crc kubenswrapper[4895]: I0129 09:02:58.096035 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-105a-account-create-update-s9xqr" Jan 29 09:02:58 crc kubenswrapper[4895]: I0129 09:02:58.661612 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="b56c025d-f59c-402b-8ad8-072e78d3b776" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.149:9292/healthcheck\": dial tcp 10.217.0.149:9292: connect: connection refused" Jan 29 09:02:58 crc kubenswrapper[4895]: I0129 09:02:58.661678 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="b56c025d-f59c-402b-8ad8-072e78d3b776" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.149:9292/healthcheck\": dial tcp 10.217.0.149:9292: connect: connection refused" Jan 29 09:02:59 crc kubenswrapper[4895]: I0129 09:02:59.562478 4895 generic.go:334] "Generic (PLEG): container finished" podID="b56c025d-f59c-402b-8ad8-072e78d3b776" containerID="8e00c9b1edea6840545c4e3f417d54a872460f1668d288f2b62b3044b0eb35c5" exitCode=0 Jan 29 09:02:59 crc kubenswrapper[4895]: I0129 09:02:59.562588 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b56c025d-f59c-402b-8ad8-072e78d3b776","Type":"ContainerDied","Data":"8e00c9b1edea6840545c4e3f417d54a872460f1668d288f2b62b3044b0eb35c5"} Jan 29 09:03:00 crc kubenswrapper[4895]: I0129 09:03:00.940453 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 09:03:00 crc kubenswrapper[4895]: I0129 09:03:00.941585 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.068125 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.171992 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b56c025d-f59c-402b-8ad8-072e78d3b776-logs\") pod \"b56c025d-f59c-402b-8ad8-072e78d3b776\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.172071 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-config-data\") pod \"b56c025d-f59c-402b-8ad8-072e78d3b776\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.172225 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-scripts\") pod \"b56c025d-f59c-402b-8ad8-072e78d3b776\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.172675 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b56c025d-f59c-402b-8ad8-072e78d3b776-httpd-run\") pod \"b56c025d-f59c-402b-8ad8-072e78d3b776\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.172908 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-internal-tls-certs\") pod \"b56c025d-f59c-402b-8ad8-072e78d3b776\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.173250 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"b56c025d-f59c-402b-8ad8-072e78d3b776\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.173319 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-combined-ca-bundle\") pod \"b56c025d-f59c-402b-8ad8-072e78d3b776\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.174175 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdv2s\" (UniqueName: \"kubernetes.io/projected/b56c025d-f59c-402b-8ad8-072e78d3b776-kube-api-access-sdv2s\") pod \"b56c025d-f59c-402b-8ad8-072e78d3b776\" (UID: \"b56c025d-f59c-402b-8ad8-072e78d3b776\") " Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.181620 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "b56c025d-f59c-402b-8ad8-072e78d3b776" (UID: "b56c025d-f59c-402b-8ad8-072e78d3b776"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.181841 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-scripts" (OuterVolumeSpecName: "scripts") pod "b56c025d-f59c-402b-8ad8-072e78d3b776" (UID: "b56c025d-f59c-402b-8ad8-072e78d3b776"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.182187 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b56c025d-f59c-402b-8ad8-072e78d3b776-logs" (OuterVolumeSpecName: "logs") pod "b56c025d-f59c-402b-8ad8-072e78d3b776" (UID: "b56c025d-f59c-402b-8ad8-072e78d3b776"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.182891 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b56c025d-f59c-402b-8ad8-072e78d3b776-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b56c025d-f59c-402b-8ad8-072e78d3b776" (UID: "b56c025d-f59c-402b-8ad8-072e78d3b776"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.205880 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.234130 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b56c025d-f59c-402b-8ad8-072e78d3b776-kube-api-access-sdv2s" (OuterVolumeSpecName: "kube-api-access-sdv2s") pod "b56c025d-f59c-402b-8ad8-072e78d3b776" (UID: "b56c025d-f59c-402b-8ad8-072e78d3b776"). InnerVolumeSpecName "kube-api-access-sdv2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.287778 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.287812 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdv2s\" (UniqueName: \"kubernetes.io/projected/b56c025d-f59c-402b-8ad8-072e78d3b776-kube-api-access-sdv2s\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.287827 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b56c025d-f59c-402b-8ad8-072e78d3b776-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.287836 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.287848 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b56c025d-f59c-402b-8ad8-072e78d3b776-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.303058 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b56c025d-f59c-402b-8ad8-072e78d3b776" (UID: "b56c025d-f59c-402b-8ad8-072e78d3b776"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.340364 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.383873 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-config-data" (OuterVolumeSpecName: "config-data") pod "b56c025d-f59c-402b-8ad8-072e78d3b776" (UID: "b56c025d-f59c-402b-8ad8-072e78d3b776"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.390640 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.390682 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.390692 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.414364 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b56c025d-f59c-402b-8ad8-072e78d3b776" (UID: "b56c025d-f59c-402b-8ad8-072e78d3b776"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.486000 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-432f-account-create-update-rqswk"] Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.486065 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-msmz2"] Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.486261 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.486298 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.493020 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b56c025d-f59c-402b-8ad8-072e78d3b776-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.665422 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dpcdv"] Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.682624 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-105a-account-create-update-s9xqr"] Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.695455 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.708411 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fnzdv"] Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.725761 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.729994 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-msmz2" event={"ID":"5ad7a906-bb60-46a4-9cd1-edcdbc3db91d","Type":"ContainerStarted","Data":"6e3b2cdb4f1e89c2ef2329273b48247230f4d6db17d27e69881933ffd5a172f4"} Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.743498 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-432f-account-create-update-rqswk" event={"ID":"4260798e-471a-4a37-8a59-e4c5842d7ea5","Type":"ContainerStarted","Data":"eb73eec8c6b81f98f1665687f402bb18e206dff1aab03ae9a60e1678c4329429"} Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.747696 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b5c0-account-create-update-lmlgf"] Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.760715 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3048dd8-2192-435c-a25f-8823906061ac","Type":"ContainerStarted","Data":"dd4a9e73d8b336493ad063e592ca33f69b8f5458d6749886a2434a6405b62b5c"} Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.761149 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="ceilometer-central-agent" containerID="cri-o://8b803d19f0b22a5d8bfed61897b8d03efc523962f712390e93f6205606aa9697" gracePeriod=30 Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.761459 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.762030 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="proxy-httpd" containerID="cri-o://dd4a9e73d8b336493ad063e592ca33f69b8f5458d6749886a2434a6405b62b5c" gracePeriod=30 Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.762318 4895 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="sg-core" containerID="cri-o://241f0b80a30c71297e463e6c07f71a2b4b1e52ed8a4ad6ffb7ebc800dfdb4745" gracePeriod=30 Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.762422 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="ceilometer-notification-agent" containerID="cri-o://198155e5d6b3ab05adf4ec1b2deda980a6bc062a8baa4401f53a16fdffbfa3cc" gracePeriod=30 Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.800188 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f10bf685-c7de-4126-afc5-6bd68c3e8845","Type":"ContainerStarted","Data":"fcabcb04160f9da9abbaebb3d4020979f82fdaae9a407e768a86c3259cbd1f21"} Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.810289 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.812865 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b56c025d-f59c-402b-8ad8-072e78d3b776","Type":"ContainerDied","Data":"93dd0ad8d3b7a6ee99134429c00ed1df5c308d0e4f92198ece399d1ed1cb1b24"} Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.813229 4895 scope.go:117] "RemoveContainer" containerID="8e00c9b1edea6840545c4e3f417d54a872460f1668d288f2b62b3044b0eb35c5" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.814720 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.814801 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.888680 4895 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.592904421 podStartE2EDuration="14.888635955s" podCreationTimestamp="2026-01-29 09:02:47 +0000 UTC" firstStartedPulling="2026-01-29 09:02:47.997400037 +0000 UTC m=+1309.638908183" lastFinishedPulling="2026-01-29 09:03:00.293131571 +0000 UTC m=+1321.934639717" observedRunningTime="2026-01-29 09:03:01.813743987 +0000 UTC m=+1323.455252133" watchObservedRunningTime="2026-01-29 09:03:01.888635955 +0000 UTC m=+1323.530144101" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.894323 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.973423839 podStartE2EDuration="46.894289616s" podCreationTimestamp="2026-01-29 09:02:15 +0000 UTC" firstStartedPulling="2026-01-29 09:02:18.378979014 +0000 UTC m=+1280.020487160" lastFinishedPulling="2026-01-29 09:03:00.299844791 +0000 UTC m=+1321.941352937" observedRunningTime="2026-01-29 09:03:01.836623271 +0000 UTC m=+1323.478131427" watchObservedRunningTime="2026-01-29 09:03:01.894289616 +0000 UTC m=+1323.535797762" Jan 29 09:03:01 crc kubenswrapper[4895]: I0129 09:03:01.985595 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.000565 4895 scope.go:117] "RemoveContainer" containerID="6340fa4e8500894fe5fdd4e8727a9c964dde4935e0e16ed68276fec030e46b14" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.045685 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.064900 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:03:02 crc kubenswrapper[4895]: E0129 09:03:02.072921 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b56c025d-f59c-402b-8ad8-072e78d3b776" containerName="glance-httpd" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.072973 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b56c025d-f59c-402b-8ad8-072e78d3b776" containerName="glance-httpd" Jan 29 09:03:02 crc kubenswrapper[4895]: E0129 09:03:02.072994 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b56c025d-f59c-402b-8ad8-072e78d3b776" containerName="glance-log" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.073001 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b56c025d-f59c-402b-8ad8-072e78d3b776" containerName="glance-log" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.073195 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b56c025d-f59c-402b-8ad8-072e78d3b776" containerName="glance-httpd" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.073222 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b56c025d-f59c-402b-8ad8-072e78d3b776" containerName="glance-log" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.074543 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.077526 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.079562 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.079814 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.252046 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbcc1d5c-0822-492b-98ce-667e0f13d497-logs\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.252609 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.252690 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbcc1d5c-0822-492b-98ce-667e0f13d497-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.252794 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/dbcc1d5c-0822-492b-98ce-667e0f13d497-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.252881 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcc1d5c-0822-492b-98ce-667e0f13d497-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.252953 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dbcc1d5c-0822-492b-98ce-667e0f13d497-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.252996 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8ldn\" (UniqueName: \"kubernetes.io/projected/dbcc1d5c-0822-492b-98ce-667e0f13d497-kube-api-access-b8ldn\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.253023 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcc1d5c-0822-492b-98ce-667e0f13d497-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.355278 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dbcc1d5c-0822-492b-98ce-667e0f13d497-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.355733 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8ldn\" (UniqueName: \"kubernetes.io/projected/dbcc1d5c-0822-492b-98ce-667e0f13d497-kube-api-access-b8ldn\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.355852 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcc1d5c-0822-492b-98ce-667e0f13d497-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.356084 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbcc1d5c-0822-492b-98ce-667e0f13d497-logs\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.356179 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dbcc1d5c-0822-492b-98ce-667e0f13d497-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.356382 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.356629 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbcc1d5c-0822-492b-98ce-667e0f13d497-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.356821 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcc1d5c-0822-492b-98ce-667e0f13d497-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.357151 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcc1d5c-0822-492b-98ce-667e0f13d497-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.357246 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.360266 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbcc1d5c-0822-492b-98ce-667e0f13d497-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.366222 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbcc1d5c-0822-492b-98ce-667e0f13d497-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.367525 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcc1d5c-0822-492b-98ce-667e0f13d497-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.368034 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcc1d5c-0822-492b-98ce-667e0f13d497-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.376856 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcc1d5c-0822-492b-98ce-667e0f13d497-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.387794 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8ldn\" (UniqueName: \"kubernetes.io/projected/dbcc1d5c-0822-492b-98ce-667e0f13d497-kube-api-access-b8ldn\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " 
pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.400363 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"dbcc1d5c-0822-492b-98ce-667e0f13d497\") " pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.601311 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.857848 4895 generic.go:334] "Generic (PLEG): container finished" podID="4260798e-471a-4a37-8a59-e4c5842d7ea5" containerID="9143e449f62e512aee0d34bc04457738bff8166802e1eb4169bb13bc5b0877d5" exitCode=0 Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.858439 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-432f-account-create-update-rqswk" event={"ID":"4260798e-471a-4a37-8a59-e4c5842d7ea5","Type":"ContainerDied","Data":"9143e449f62e512aee0d34bc04457738bff8166802e1eb4169bb13bc5b0877d5"} Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.880466 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dpcdv" event={"ID":"81129832-d241-4127-b30b-9a54a350d12f","Type":"ContainerStarted","Data":"b9d6a8a96ce1daf49ebc8ffe0d94bbdf073a017041a716d750c475cf3d4eac83"} Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.880892 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dpcdv" event={"ID":"81129832-d241-4127-b30b-9a54a350d12f","Type":"ContainerStarted","Data":"ccee97bf6829509fa1885a80984118d8a36a1927240547ace745bf4d44488f0d"} Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.918462 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b5c0-account-create-update-lmlgf" 
event={"ID":"e22259fa-a96d-4509-9499-a569fe60a39c","Type":"ContainerStarted","Data":"6e84bd6cbfa33766bafc717271dc88f08c8bc980bda3a8336b1951cd99c59ddc"} Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.946531 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-dpcdv" podStartSLOduration=6.946474627 podStartE2EDuration="6.946474627s" podCreationTimestamp="2026-01-29 09:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:03:02.936372386 +0000 UTC m=+1324.577880532" watchObservedRunningTime="2026-01-29 09:03:02.946474627 +0000 UTC m=+1324.587982773" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.952858 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fnzdv" event={"ID":"fc395a7a-25bb-46cd-89b0-9bbd5b1431f7","Type":"ContainerStarted","Data":"756b7875e145334e41f8becfb56df5e5a6a010a60761bd1f75c3b91cd41f1568"} Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.983848 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-b5c0-account-create-update-lmlgf" podStartSLOduration=5.983824608 podStartE2EDuration="5.983824608s" podCreationTimestamp="2026-01-29 09:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:03:02.972604417 +0000 UTC m=+1324.614112563" watchObservedRunningTime="2026-01-29 09:03:02.983824608 +0000 UTC m=+1324.625332754" Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.987394 4895 generic.go:334] "Generic (PLEG): container finished" podID="a3048dd8-2192-435c-a25f-8823906061ac" containerID="241f0b80a30c71297e463e6c07f71a2b4b1e52ed8a4ad6ffb7ebc800dfdb4745" exitCode=2 Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.987437 4895 generic.go:334] "Generic (PLEG): container finished" 
podID="a3048dd8-2192-435c-a25f-8823906061ac" containerID="198155e5d6b3ab05adf4ec1b2deda980a6bc062a8baa4401f53a16fdffbfa3cc" exitCode=0 Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.987539 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3048dd8-2192-435c-a25f-8823906061ac","Type":"ContainerDied","Data":"241f0b80a30c71297e463e6c07f71a2b4b1e52ed8a4ad6ffb7ebc800dfdb4745"} Jan 29 09:03:02 crc kubenswrapper[4895]: I0129 09:03:02.987573 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3048dd8-2192-435c-a25f-8823906061ac","Type":"ContainerDied","Data":"198155e5d6b3ab05adf4ec1b2deda980a6bc062a8baa4401f53a16fdffbfa3cc"} Jan 29 09:03:03 crc kubenswrapper[4895]: I0129 09:03:03.021211 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-fnzdv" podStartSLOduration=8.0211744 podStartE2EDuration="8.0211744s" podCreationTimestamp="2026-01-29 09:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:03:03.006630469 +0000 UTC m=+1324.648138615" watchObservedRunningTime="2026-01-29 09:03:03.0211744 +0000 UTC m=+1324.662682546" Jan 29 09:03:03 crc kubenswrapper[4895]: I0129 09:03:03.025662 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-105a-account-create-update-s9xqr" event={"ID":"9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439","Type":"ContainerStarted","Data":"fe5d920ec47cc492fd6670b9cf673448c614005411c95f1dd8cc8bc44c92559b"} Jan 29 09:03:03 crc kubenswrapper[4895]: I0129 09:03:03.053291 4895 generic.go:334] "Generic (PLEG): container finished" podID="5ad7a906-bb60-46a4-9cd1-edcdbc3db91d" containerID="02495fe07f98fa2a7ead86a5f12c2939280a050dd3fea0e2965d0ea059f19240" exitCode=0 Jan 29 09:03:03 crc kubenswrapper[4895]: I0129 09:03:03.053421 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-msmz2" event={"ID":"5ad7a906-bb60-46a4-9cd1-edcdbc3db91d","Type":"ContainerDied","Data":"02495fe07f98fa2a7ead86a5f12c2939280a050dd3fea0e2965d0ea059f19240"} Jan 29 09:03:03 crc kubenswrapper[4895]: I0129 09:03:03.072359 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-105a-account-create-update-s9xqr" podStartSLOduration=6.07233524 podStartE2EDuration="6.07233524s" podCreationTimestamp="2026-01-29 09:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:03:03.066030462 +0000 UTC m=+1324.707538608" watchObservedRunningTime="2026-01-29 09:03:03.07233524 +0000 UTC m=+1324.713843386" Jan 29 09:03:03 crc kubenswrapper[4895]: I0129 09:03:03.080833 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"42c2646f-8f1b-4357-8f80-b339103c8d5d","Type":"ContainerStarted","Data":"a17286ad01edbd1c4b2e3b058155503bc05e4a3301fa646bcb093a3d88d97d62"} Jan 29 09:03:03 crc kubenswrapper[4895]: E0129 09:03:03.235400 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42c2646f_8f1b_4357_8f80_b339103c8d5d.slice/crio-conmon-a17286ad01edbd1c4b2e3b058155503bc05e4a3301fa646bcb093a3d88d97d62.scope\": RecentStats: unable to find data in memory cache]" Jan 29 09:03:03 crc kubenswrapper[4895]: I0129 09:03:03.243313 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b56c025d-f59c-402b-8ad8-072e78d3b776" path="/var/lib/kubelet/pods/b56c025d-f59c-402b-8ad8-072e78d3b776/volumes" Jan 29 09:03:03 crc kubenswrapper[4895]: I0129 09:03:03.805453 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 09:03:03 crc kubenswrapper[4895]: I0129 09:03:03.996501 4895 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.086800 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-combined-ca-bundle\") pod \"42c2646f-8f1b-4357-8f80-b339103c8d5d\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.087275 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/42c2646f-8f1b-4357-8f80-b339103c8d5d-var-lib-ironic\") pod \"42c2646f-8f1b-4357-8f80-b339103c8d5d\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.087344 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/42c2646f-8f1b-4357-8f80-b339103c8d5d-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"42c2646f-8f1b-4357-8f80-b339103c8d5d\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.087370 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvs5f\" (UniqueName: \"kubernetes.io/projected/42c2646f-8f1b-4357-8f80-b339103c8d5d-kube-api-access-qvs5f\") pod \"42c2646f-8f1b-4357-8f80-b339103c8d5d\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.087443 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/42c2646f-8f1b-4357-8f80-b339103c8d5d-etc-podinfo\") pod \"42c2646f-8f1b-4357-8f80-b339103c8d5d\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.087524 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-config\") pod \"42c2646f-8f1b-4357-8f80-b339103c8d5d\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.087555 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-scripts\") pod \"42c2646f-8f1b-4357-8f80-b339103c8d5d\" (UID: \"42c2646f-8f1b-4357-8f80-b339103c8d5d\") " Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.095206 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c2646f-8f1b-4357-8f80-b339103c8d5d-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "42c2646f-8f1b-4357-8f80-b339103c8d5d" (UID: "42c2646f-8f1b-4357-8f80-b339103c8d5d"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.107388 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c2646f-8f1b-4357-8f80-b339103c8d5d-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "42c2646f-8f1b-4357-8f80-b339103c8d5d" (UID: "42c2646f-8f1b-4357-8f80-b339103c8d5d"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.107805 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-config" (OuterVolumeSpecName: "config") pod "42c2646f-8f1b-4357-8f80-b339103c8d5d" (UID: "42c2646f-8f1b-4357-8f80-b339103c8d5d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.120202 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/42c2646f-8f1b-4357-8f80-b339103c8d5d-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "42c2646f-8f1b-4357-8f80-b339103c8d5d" (UID: "42c2646f-8f1b-4357-8f80-b339103c8d5d"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.120827 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-scripts" (OuterVolumeSpecName: "scripts") pod "42c2646f-8f1b-4357-8f80-b339103c8d5d" (UID: "42c2646f-8f1b-4357-8f80-b339103c8d5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.122268 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dbcc1d5c-0822-492b-98ce-667e0f13d497","Type":"ContainerStarted","Data":"169d797fbef7155365646e5bc142a3959458873a2f30590349f217a93addd171"} Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.129656 4895 generic.go:334] "Generic (PLEG): container finished" podID="9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439" containerID="4bed6288328fdfe5e05f6ff06a39266f1340688f518753012e8d13ede234895d" exitCode=0 Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.129757 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-105a-account-create-update-s9xqr" event={"ID":"9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439","Type":"ContainerDied","Data":"4bed6288328fdfe5e05f6ff06a39266f1340688f518753012e8d13ede234895d"} Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.151318 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/42c2646f-8f1b-4357-8f80-b339103c8d5d-kube-api-access-qvs5f" (OuterVolumeSpecName: "kube-api-access-qvs5f") pod "42c2646f-8f1b-4357-8f80-b339103c8d5d" (UID: "42c2646f-8f1b-4357-8f80-b339103c8d5d"). InnerVolumeSpecName "kube-api-access-qvs5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.163424 4895 generic.go:334] "Generic (PLEG): container finished" podID="e22259fa-a96d-4509-9499-a569fe60a39c" containerID="a82751c3f05e7e05493ed5f8f7947f34c28b351582ed21bb38257a319ec0ecaf" exitCode=0 Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.163555 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b5c0-account-create-update-lmlgf" event={"ID":"e22259fa-a96d-4509-9499-a569fe60a39c","Type":"ContainerDied","Data":"a82751c3f05e7e05493ed5f8f7947f34c28b351582ed21bb38257a319ec0ecaf"} Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.193664 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvs5f\" (UniqueName: \"kubernetes.io/projected/42c2646f-8f1b-4357-8f80-b339103c8d5d-kube-api-access-qvs5f\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.193710 4895 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/42c2646f-8f1b-4357-8f80-b339103c8d5d-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.193725 4895 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/42c2646f-8f1b-4357-8f80-b339103c8d5d-etc-podinfo\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.193739 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-config\") on node 
\"crc\" DevicePath \"\"" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.193750 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.193759 4895 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/42c2646f-8f1b-4357-8f80-b339103c8d5d-var-lib-ironic\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.199388 4895 generic.go:334] "Generic (PLEG): container finished" podID="81129832-d241-4127-b30b-9a54a350d12f" containerID="b9d6a8a96ce1daf49ebc8ffe0d94bbdf073a017041a716d750c475cf3d4eac83" exitCode=0 Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.199553 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dpcdv" event={"ID":"81129832-d241-4127-b30b-9a54a350d12f","Type":"ContainerDied","Data":"b9d6a8a96ce1daf49ebc8ffe0d94bbdf073a017041a716d750c475cf3d4eac83"} Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.219352 4895 generic.go:334] "Generic (PLEG): container finished" podID="42c2646f-8f1b-4357-8f80-b339103c8d5d" containerID="a17286ad01edbd1c4b2e3b058155503bc05e4a3301fa646bcb093a3d88d97d62" exitCode=0 Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.219445 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"42c2646f-8f1b-4357-8f80-b339103c8d5d","Type":"ContainerDied","Data":"a17286ad01edbd1c4b2e3b058155503bc05e4a3301fa646bcb093a3d88d97d62"} Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.219483 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"42c2646f-8f1b-4357-8f80-b339103c8d5d","Type":"ContainerDied","Data":"1b1aa54ad4574507f0d8069bcb3bdb63304c1ef61e0ca0822e6be0b1a329fdc0"} Jan 29 09:03:04 crc 
kubenswrapper[4895]: I0129 09:03:04.219503 4895 scope.go:117] "RemoveContainer" containerID="a17286ad01edbd1c4b2e3b058155503bc05e4a3301fa646bcb093a3d88d97d62" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.219702 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.237389 4895 generic.go:334] "Generic (PLEG): container finished" podID="fc395a7a-25bb-46cd-89b0-9bbd5b1431f7" containerID="0cc5ce15347f94bf6758485234d0fe21f2a9c878e0068774924b85d44d094c25" exitCode=0 Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.237536 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fnzdv" event={"ID":"fc395a7a-25bb-46cd-89b0-9bbd5b1431f7","Type":"ContainerDied","Data":"0cc5ce15347f94bf6758485234d0fe21f2a9c878e0068774924b85d44d094c25"} Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.243382 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f893b3e3-3833-4a94-ab55-951f600fdadd","Type":"ContainerStarted","Data":"f3880d5648bd6779f6ac04a3d3c5267cf390a0ee9a3b7842f23a494ddce96f89"} Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.360140 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42c2646f-8f1b-4357-8f80-b339103c8d5d" (UID: "42c2646f-8f1b-4357-8f80-b339103c8d5d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.410106 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c2646f-8f1b-4357-8f80-b339103c8d5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.470212 4895 scope.go:117] "RemoveContainer" containerID="645317a1718bad3ffc7de1be3da22b9de6d77dbf83d4da91dff5a115ddead8cd" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.552365 4895 scope.go:117] "RemoveContainer" containerID="a17286ad01edbd1c4b2e3b058155503bc05e4a3301fa646bcb093a3d88d97d62" Jan 29 09:03:04 crc kubenswrapper[4895]: E0129 09:03:04.560418 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a17286ad01edbd1c4b2e3b058155503bc05e4a3301fa646bcb093a3d88d97d62\": container with ID starting with a17286ad01edbd1c4b2e3b058155503bc05e4a3301fa646bcb093a3d88d97d62 not found: ID does not exist" containerID="a17286ad01edbd1c4b2e3b058155503bc05e4a3301fa646bcb093a3d88d97d62" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.560485 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a17286ad01edbd1c4b2e3b058155503bc05e4a3301fa646bcb093a3d88d97d62"} err="failed to get container status \"a17286ad01edbd1c4b2e3b058155503bc05e4a3301fa646bcb093a3d88d97d62\": rpc error: code = NotFound desc = could not find container \"a17286ad01edbd1c4b2e3b058155503bc05e4a3301fa646bcb093a3d88d97d62\": container with ID starting with a17286ad01edbd1c4b2e3b058155503bc05e4a3301fa646bcb093a3d88d97d62 not found: ID does not exist" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.560523 4895 scope.go:117] "RemoveContainer" containerID="645317a1718bad3ffc7de1be3da22b9de6d77dbf83d4da91dff5a115ddead8cd" Jan 29 09:03:04 crc kubenswrapper[4895]: E0129 09:03:04.561634 4895 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"645317a1718bad3ffc7de1be3da22b9de6d77dbf83d4da91dff5a115ddead8cd\": container with ID starting with 645317a1718bad3ffc7de1be3da22b9de6d77dbf83d4da91dff5a115ddead8cd not found: ID does not exist" containerID="645317a1718bad3ffc7de1be3da22b9de6d77dbf83d4da91dff5a115ddead8cd" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.561703 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"645317a1718bad3ffc7de1be3da22b9de6d77dbf83d4da91dff5a115ddead8cd"} err="failed to get container status \"645317a1718bad3ffc7de1be3da22b9de6d77dbf83d4da91dff5a115ddead8cd\": rpc error: code = NotFound desc = could not find container \"645317a1718bad3ffc7de1be3da22b9de6d77dbf83d4da91dff5a115ddead8cd\": container with ID starting with 645317a1718bad3ffc7de1be3da22b9de6d77dbf83d4da91dff5a115ddead8cd not found: ID does not exist" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.665622 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.687961 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"] Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.734022 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Jan 29 09:03:04 crc kubenswrapper[4895]: E0129 09:03:04.734859 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c2646f-8f1b-4357-8f80-b339103c8d5d" containerName="inspector-pxe-init" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.734882 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c2646f-8f1b-4357-8f80-b339103c8d5d" containerName="inspector-pxe-init" Jan 29 09:03:04 crc kubenswrapper[4895]: E0129 09:03:04.734905 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="42c2646f-8f1b-4357-8f80-b339103c8d5d" containerName="ironic-python-agent-init" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.734914 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c2646f-8f1b-4357-8f80-b339103c8d5d" containerName="ironic-python-agent-init" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.735221 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c2646f-8f1b-4357-8f80-b339103c8d5d" containerName="inspector-pxe-init" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.742119 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.743731 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.748804 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.749204 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.749339 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.752124 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.825122 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.825195 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.825231 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxlm8\" (UniqueName: \"kubernetes.io/projected/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-kube-api-access-kxlm8\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.825261 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.825306 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-scripts\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.825336 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-config\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.825444 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.825476 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.825498 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.917399 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-432f-account-create-update-rqswk" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.928480 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.928878 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.929026 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.929162 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.929284 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.929397 
4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxlm8\" (UniqueName: \"kubernetes.io/projected/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-kube-api-access-kxlm8\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.929446 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.929546 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.929642 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-scripts\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.929729 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-config\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.930595 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: 
\"kubernetes.io/empty-dir/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.938345 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-config\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.940523 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.942255 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.944218 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.951233 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " 
pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.953235 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-scripts\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.965873 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxlm8\" (UniqueName: \"kubernetes.io/projected/9a9f2123-8dc5-46d6-81ae-802f6e92c3a8-kube-api-access-kxlm8\") pod \"ironic-inspector-0\" (UID: \"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8\") " pod="openstack/ironic-inspector-0" Jan 29 09:03:04 crc kubenswrapper[4895]: I0129 09:03:04.989255 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-msmz2" Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.032693 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4260798e-471a-4a37-8a59-e4c5842d7ea5-operator-scripts\") pod \"4260798e-471a-4a37-8a59-e4c5842d7ea5\" (UID: \"4260798e-471a-4a37-8a59-e4c5842d7ea5\") " Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.033026 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpzh4\" (UniqueName: \"kubernetes.io/projected/4260798e-471a-4a37-8a59-e4c5842d7ea5-kube-api-access-tpzh4\") pod \"4260798e-471a-4a37-8a59-e4c5842d7ea5\" (UID: \"4260798e-471a-4a37-8a59-e4c5842d7ea5\") " Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.035170 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4260798e-471a-4a37-8a59-e4c5842d7ea5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4260798e-471a-4a37-8a59-e4c5842d7ea5" (UID: 
"4260798e-471a-4a37-8a59-e4c5842d7ea5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.041844 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4260798e-471a-4a37-8a59-e4c5842d7ea5-kube-api-access-tpzh4" (OuterVolumeSpecName: "kube-api-access-tpzh4") pod "4260798e-471a-4a37-8a59-e4c5842d7ea5" (UID: "4260798e-471a-4a37-8a59-e4c5842d7ea5"). InnerVolumeSpecName "kube-api-access-tpzh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.086797 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.135208 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad7a906-bb60-46a4-9cd1-edcdbc3db91d-operator-scripts\") pod \"5ad7a906-bb60-46a4-9cd1-edcdbc3db91d\" (UID: \"5ad7a906-bb60-46a4-9cd1-edcdbc3db91d\") " Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.135486 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxj98\" (UniqueName: \"kubernetes.io/projected/5ad7a906-bb60-46a4-9cd1-edcdbc3db91d-kube-api-access-mxj98\") pod \"5ad7a906-bb60-46a4-9cd1-edcdbc3db91d\" (UID: \"5ad7a906-bb60-46a4-9cd1-edcdbc3db91d\") " Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.136381 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4260798e-471a-4a37-8a59-e4c5842d7ea5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.136629 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpzh4\" (UniqueName: 
\"kubernetes.io/projected/4260798e-471a-4a37-8a59-e4c5842d7ea5-kube-api-access-tpzh4\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.137481 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ad7a906-bb60-46a4-9cd1-edcdbc3db91d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ad7a906-bb60-46a4-9cd1-edcdbc3db91d" (UID: "5ad7a906-bb60-46a4-9cd1-edcdbc3db91d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.151166 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad7a906-bb60-46a4-9cd1-edcdbc3db91d-kube-api-access-mxj98" (OuterVolumeSpecName: "kube-api-access-mxj98") pod "5ad7a906-bb60-46a4-9cd1-edcdbc3db91d" (UID: "5ad7a906-bb60-46a4-9cd1-edcdbc3db91d"). InnerVolumeSpecName "kube-api-access-mxj98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.240091 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxj98\" (UniqueName: \"kubernetes.io/projected/5ad7a906-bb60-46a4-9cd1-edcdbc3db91d-kube-api-access-mxj98\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.240131 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad7a906-bb60-46a4-9cd1-edcdbc3db91d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.245813 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c2646f-8f1b-4357-8f80-b339103c8d5d" path="/var/lib/kubelet/pods/42c2646f-8f1b-4357-8f80-b339103c8d5d/volumes" Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.358205 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-432f-account-create-update-rqswk" event={"ID":"4260798e-471a-4a37-8a59-e4c5842d7ea5","Type":"ContainerDied","Data":"eb73eec8c6b81f98f1665687f402bb18e206dff1aab03ae9a60e1678c4329429"} Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.358597 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb73eec8c6b81f98f1665687f402bb18e206dff1aab03ae9a60e1678c4329429" Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.358686 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-432f-account-create-update-rqswk" Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.372138 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dbcc1d5c-0822-492b-98ce-667e0f13d497","Type":"ContainerStarted","Data":"3d353f36949144f3ac2e301b79ad4b01d41b94b34a16fae13b38bde4b70b6207"} Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.375953 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-msmz2" Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.376427 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-msmz2" event={"ID":"5ad7a906-bb60-46a4-9cd1-edcdbc3db91d","Type":"ContainerDied","Data":"6e3b2cdb4f1e89c2ef2329273b48247230f4d6db17d27e69881933ffd5a172f4"} Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.376444 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e3b2cdb4f1e89c2ef2329273b48247230f4d6db17d27e69881933ffd5a172f4" Jan 29 09:03:05 crc kubenswrapper[4895]: I0129 09:03:05.990039 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.360368 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.361012 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.375000 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.427682 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8","Type":"ContainerStarted","Data":"e9080dc213bb200c82809a8d231c3a479195c7748899bbe2ea704b4a4745dd26"} Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.569701 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fnzdv" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.592869 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-dpcdv" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.608681 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b5c0-account-create-update-lmlgf" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.701148 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-105a-account-create-update-s9xqr" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.746124 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wf57\" (UniqueName: \"kubernetes.io/projected/81129832-d241-4127-b30b-9a54a350d12f-kube-api-access-7wf57\") pod \"81129832-d241-4127-b30b-9a54a350d12f\" (UID: \"81129832-d241-4127-b30b-9a54a350d12f\") " Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.746596 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txgbx\" (UniqueName: \"kubernetes.io/projected/fc395a7a-25bb-46cd-89b0-9bbd5b1431f7-kube-api-access-txgbx\") pod \"fc395a7a-25bb-46cd-89b0-9bbd5b1431f7\" (UID: \"fc395a7a-25bb-46cd-89b0-9bbd5b1431f7\") " Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.746801 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkvtw\" (UniqueName: \"kubernetes.io/projected/e22259fa-a96d-4509-9499-a569fe60a39c-kube-api-access-tkvtw\") pod \"e22259fa-a96d-4509-9499-a569fe60a39c\" (UID: \"e22259fa-a96d-4509-9499-a569fe60a39c\") " Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.747183 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc395a7a-25bb-46cd-89b0-9bbd5b1431f7-operator-scripts\") pod \"fc395a7a-25bb-46cd-89b0-9bbd5b1431f7\" (UID: \"fc395a7a-25bb-46cd-89b0-9bbd5b1431f7\") " Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 
09:03:06.747333 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81129832-d241-4127-b30b-9a54a350d12f-operator-scripts\") pod \"81129832-d241-4127-b30b-9a54a350d12f\" (UID: \"81129832-d241-4127-b30b-9a54a350d12f\") " Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.747515 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e22259fa-a96d-4509-9499-a569fe60a39c-operator-scripts\") pod \"e22259fa-a96d-4509-9499-a569fe60a39c\" (UID: \"e22259fa-a96d-4509-9499-a569fe60a39c\") " Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.750258 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81129832-d241-4127-b30b-9a54a350d12f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81129832-d241-4127-b30b-9a54a350d12f" (UID: "81129832-d241-4127-b30b-9a54a350d12f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.750805 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc395a7a-25bb-46cd-89b0-9bbd5b1431f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc395a7a-25bb-46cd-89b0-9bbd5b1431f7" (UID: "fc395a7a-25bb-46cd-89b0-9bbd5b1431f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.751178 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e22259fa-a96d-4509-9499-a569fe60a39c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e22259fa-a96d-4509-9499-a569fe60a39c" (UID: "e22259fa-a96d-4509-9499-a569fe60a39c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.762823 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc395a7a-25bb-46cd-89b0-9bbd5b1431f7-kube-api-access-txgbx" (OuterVolumeSpecName: "kube-api-access-txgbx") pod "fc395a7a-25bb-46cd-89b0-9bbd5b1431f7" (UID: "fc395a7a-25bb-46cd-89b0-9bbd5b1431f7"). InnerVolumeSpecName "kube-api-access-txgbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.763538 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e22259fa-a96d-4509-9499-a569fe60a39c-kube-api-access-tkvtw" (OuterVolumeSpecName: "kube-api-access-tkvtw") pod "e22259fa-a96d-4509-9499-a569fe60a39c" (UID: "e22259fa-a96d-4509-9499-a569fe60a39c"). InnerVolumeSpecName "kube-api-access-tkvtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.774149 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81129832-d241-4127-b30b-9a54a350d12f-kube-api-access-7wf57" (OuterVolumeSpecName: "kube-api-access-7wf57") pod "81129832-d241-4127-b30b-9a54a350d12f" (UID: "81129832-d241-4127-b30b-9a54a350d12f"). InnerVolumeSpecName "kube-api-access-7wf57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.850339 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xblzg\" (UniqueName: \"kubernetes.io/projected/9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439-kube-api-access-xblzg\") pod \"9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439\" (UID: \"9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439\") " Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.850719 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439-operator-scripts\") pod \"9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439\" (UID: \"9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439\") " Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.851741 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc395a7a-25bb-46cd-89b0-9bbd5b1431f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.851830 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81129832-d241-4127-b30b-9a54a350d12f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.851951 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e22259fa-a96d-4509-9499-a569fe60a39c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.852028 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wf57\" (UniqueName: \"kubernetes.io/projected/81129832-d241-4127-b30b-9a54a350d12f-kube-api-access-7wf57\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.852105 4895 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-txgbx\" (UniqueName: \"kubernetes.io/projected/fc395a7a-25bb-46cd-89b0-9bbd5b1431f7-kube-api-access-txgbx\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.852175 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkvtw\" (UniqueName: \"kubernetes.io/projected/e22259fa-a96d-4509-9499-a569fe60a39c-kube-api-access-tkvtw\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.852728 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439" (UID: "9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.857539 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439-kube-api-access-xblzg" (OuterVolumeSpecName: "kube-api-access-xblzg") pod "9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439" (UID: "9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439"). InnerVolumeSpecName "kube-api-access-xblzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.954168 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xblzg\" (UniqueName: \"kubernetes.io/projected/9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439-kube-api-access-xblzg\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:06 crc kubenswrapper[4895]: I0129 09:03:06.954221 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:07 crc kubenswrapper[4895]: I0129 09:03:07.440217 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dbcc1d5c-0822-492b-98ce-667e0f13d497","Type":"ContainerStarted","Data":"83ae8f295269939a4e27da8e7d4af6ad5f1dafeb23379d93b0c1d39e440a59d9"} Jan 29 09:03:07 crc kubenswrapper[4895]: I0129 09:03:07.443097 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-105a-account-create-update-s9xqr" Jan 29 09:03:07 crc kubenswrapper[4895]: I0129 09:03:07.443145 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-105a-account-create-update-s9xqr" event={"ID":"9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439","Type":"ContainerDied","Data":"fe5d920ec47cc492fd6670b9cf673448c614005411c95f1dd8cc8bc44c92559b"} Jan 29 09:03:07 crc kubenswrapper[4895]: I0129 09:03:07.443335 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe5d920ec47cc492fd6670b9cf673448c614005411c95f1dd8cc8bc44c92559b" Jan 29 09:03:07 crc kubenswrapper[4895]: I0129 09:03:07.447216 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-b5c0-account-create-update-lmlgf" Jan 29 09:03:07 crc kubenswrapper[4895]: I0129 09:03:07.447225 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b5c0-account-create-update-lmlgf" event={"ID":"e22259fa-a96d-4509-9499-a569fe60a39c","Type":"ContainerDied","Data":"6e84bd6cbfa33766bafc717271dc88f08c8bc980bda3a8336b1951cd99c59ddc"} Jan 29 09:03:07 crc kubenswrapper[4895]: I0129 09:03:07.447292 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e84bd6cbfa33766bafc717271dc88f08c8bc980bda3a8336b1951cd99c59ddc" Jan 29 09:03:07 crc kubenswrapper[4895]: I0129 09:03:07.450677 4895 generic.go:334] "Generic (PLEG): container finished" podID="9a9f2123-8dc5-46d6-81ae-802f6e92c3a8" containerID="535780a2dabab5e19d3094544ba540387872911cccd2d16bf2bcfc6cf5da78fd" exitCode=0 Jan 29 09:03:07 crc kubenswrapper[4895]: I0129 09:03:07.450738 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8","Type":"ContainerDied","Data":"535780a2dabab5e19d3094544ba540387872911cccd2d16bf2bcfc6cf5da78fd"} Jan 29 09:03:07 crc kubenswrapper[4895]: I0129 09:03:07.457621 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dpcdv" event={"ID":"81129832-d241-4127-b30b-9a54a350d12f","Type":"ContainerDied","Data":"ccee97bf6829509fa1885a80984118d8a36a1927240547ace745bf4d44488f0d"} Jan 29 09:03:07 crc kubenswrapper[4895]: I0129 09:03:07.457683 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccee97bf6829509fa1885a80984118d8a36a1927240547ace745bf4d44488f0d" Jan 29 09:03:07 crc kubenswrapper[4895]: I0129 09:03:07.457780 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-dpcdv" Jan 29 09:03:07 crc kubenswrapper[4895]: I0129 09:03:07.463338 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fnzdv" event={"ID":"fc395a7a-25bb-46cd-89b0-9bbd5b1431f7","Type":"ContainerDied","Data":"756b7875e145334e41f8becfb56df5e5a6a010a60761bd1f75c3b91cd41f1568"} Jan 29 09:03:07 crc kubenswrapper[4895]: I0129 09:03:07.463397 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="756b7875e145334e41f8becfb56df5e5a6a010a60761bd1f75c3b91cd41f1568" Jan 29 09:03:07 crc kubenswrapper[4895]: I0129 09:03:07.463519 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fnzdv" Jan 29 09:03:07 crc kubenswrapper[4895]: I0129 09:03:07.476679 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.476652756 podStartE2EDuration="6.476652756s" podCreationTimestamp="2026-01-29 09:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:03:07.47527688 +0000 UTC m=+1329.116785026" watchObservedRunningTime="2026-01-29 09:03:07.476652756 +0000 UTC m=+1329.118160902" Jan 29 09:03:09 crc kubenswrapper[4895]: I0129 09:03:09.488418 4895 generic.go:334] "Generic (PLEG): container finished" podID="9a9f2123-8dc5-46d6-81ae-802f6e92c3a8" containerID="ba04c9576d2a1e6df7435bfa2410bf604297f85d96cde5f8eb846a53ddca51da" exitCode=0 Jan 29 09:03:09 crc kubenswrapper[4895]: I0129 09:03:09.488521 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8","Type":"ContainerDied","Data":"ba04c9576d2a1e6df7435bfa2410bf604297f85d96cde5f8eb846a53ddca51da"} Jan 29 09:03:10 crc kubenswrapper[4895]: I0129 09:03:10.659029 4895 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8","Type":"ContainerStarted","Data":"435cad0b621caceaae326ccb1d607d4370f5648984392ba66dbfd94e9db440fa"} Jan 29 09:03:11 crc kubenswrapper[4895]: I0129 09:03:11.713247 4895 generic.go:334] "Generic (PLEG): container finished" podID="a3048dd8-2192-435c-a25f-8823906061ac" containerID="8b803d19f0b22a5d8bfed61897b8d03efc523962f712390e93f6205606aa9697" exitCode=0 Jan 29 09:03:11 crc kubenswrapper[4895]: I0129 09:03:11.713391 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3048dd8-2192-435c-a25f-8823906061ac","Type":"ContainerDied","Data":"8b803d19f0b22a5d8bfed61897b8d03efc523962f712390e93f6205606aa9697"} Jan 29 09:03:11 crc kubenswrapper[4895]: I0129 09:03:11.729843 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8","Type":"ContainerStarted","Data":"1da4f4468c5d390df18bad8a94ec9c09a03c9edc8928558ec61db887b1f3907d"} Jan 29 09:03:11 crc kubenswrapper[4895]: I0129 09:03:11.729900 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8","Type":"ContainerStarted","Data":"0ceb723263ca615aa5c87f4aa8fcec721866971426741d648a55130576cd6cbc"} Jan 29 09:03:12 crc kubenswrapper[4895]: I0129 09:03:12.603036 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 09:03:12 crc kubenswrapper[4895]: I0129 09:03:12.603101 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 09:03:12 crc kubenswrapper[4895]: I0129 09:03:12.648185 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 09:03:12 crc kubenswrapper[4895]: I0129 09:03:12.665163 
4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 09:03:12 crc kubenswrapper[4895]: I0129 09:03:12.799879 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8","Type":"ContainerStarted","Data":"670290d6944f01f373cb347b213637445e57693fad1d54e1a40786d358a4cd31"} Jan 29 09:03:12 crc kubenswrapper[4895]: I0129 09:03:12.799955 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Jan 29 09:03:12 crc kubenswrapper[4895]: I0129 09:03:12.800392 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 09:03:12 crc kubenswrapper[4895]: I0129 09:03:12.802747 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 09:03:12 crc kubenswrapper[4895]: I0129 09:03:12.855092 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=8.8550588 podStartE2EDuration="8.8550588s" podCreationTimestamp="2026-01-29 09:03:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:03:12.85129291 +0000 UTC m=+1334.492801056" watchObservedRunningTime="2026-01-29 09:03:12.8550588 +0000 UTC m=+1334.496566946" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.137162 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6hnzn"] Jan 29 09:03:13 crc kubenswrapper[4895]: E0129 09:03:13.137890 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4260798e-471a-4a37-8a59-e4c5842d7ea5" containerName="mariadb-account-create-update" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.137934 4895 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4260798e-471a-4a37-8a59-e4c5842d7ea5" containerName="mariadb-account-create-update" Jan 29 09:03:13 crc kubenswrapper[4895]: E0129 09:03:13.137963 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439" containerName="mariadb-account-create-update" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.137973 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439" containerName="mariadb-account-create-update" Jan 29 09:03:13 crc kubenswrapper[4895]: E0129 09:03:13.137993 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e22259fa-a96d-4509-9499-a569fe60a39c" containerName="mariadb-account-create-update" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.138000 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e22259fa-a96d-4509-9499-a569fe60a39c" containerName="mariadb-account-create-update" Jan 29 09:03:13 crc kubenswrapper[4895]: E0129 09:03:13.138016 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad7a906-bb60-46a4-9cd1-edcdbc3db91d" containerName="mariadb-database-create" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.138023 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad7a906-bb60-46a4-9cd1-edcdbc3db91d" containerName="mariadb-database-create" Jan 29 09:03:13 crc kubenswrapper[4895]: E0129 09:03:13.138043 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81129832-d241-4127-b30b-9a54a350d12f" containerName="mariadb-database-create" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.138050 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="81129832-d241-4127-b30b-9a54a350d12f" containerName="mariadb-database-create" Jan 29 09:03:13 crc kubenswrapper[4895]: E0129 09:03:13.138064 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc395a7a-25bb-46cd-89b0-9bbd5b1431f7" containerName="mariadb-database-create" Jan 29 
09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.138071 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc395a7a-25bb-46cd-89b0-9bbd5b1431f7" containerName="mariadb-database-create" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.138359 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad7a906-bb60-46a4-9cd1-edcdbc3db91d" containerName="mariadb-database-create" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.138380 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439" containerName="mariadb-account-create-update" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.138395 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4260798e-471a-4a37-8a59-e4c5842d7ea5" containerName="mariadb-account-create-update" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.138404 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="81129832-d241-4127-b30b-9a54a350d12f" containerName="mariadb-database-create" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.138421 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e22259fa-a96d-4509-9499-a569fe60a39c" containerName="mariadb-account-create-update" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.138439 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc395a7a-25bb-46cd-89b0-9bbd5b1431f7" containerName="mariadb-database-create" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.139347 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6hnzn" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.143431 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.143555 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.144134 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-689gg" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.156333 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6hnzn"] Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.217102 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6hnzn\" (UID: \"38a22873-d856-41db-84a7-909eabf0d896\") " pod="openstack/nova-cell0-conductor-db-sync-6hnzn" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.217181 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-config-data\") pod \"nova-cell0-conductor-db-sync-6hnzn\" (UID: \"38a22873-d856-41db-84a7-909eabf0d896\") " pod="openstack/nova-cell0-conductor-db-sync-6hnzn" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.217252 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-scripts\") pod \"nova-cell0-conductor-db-sync-6hnzn\" (UID: \"38a22873-d856-41db-84a7-909eabf0d896\") " 
pod="openstack/nova-cell0-conductor-db-sync-6hnzn" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.217274 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq5xw\" (UniqueName: \"kubernetes.io/projected/38a22873-d856-41db-84a7-909eabf0d896-kube-api-access-xq5xw\") pod \"nova-cell0-conductor-db-sync-6hnzn\" (UID: \"38a22873-d856-41db-84a7-909eabf0d896\") " pod="openstack/nova-cell0-conductor-db-sync-6hnzn" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.319300 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6hnzn\" (UID: \"38a22873-d856-41db-84a7-909eabf0d896\") " pod="openstack/nova-cell0-conductor-db-sync-6hnzn" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.319365 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-config-data\") pod \"nova-cell0-conductor-db-sync-6hnzn\" (UID: \"38a22873-d856-41db-84a7-909eabf0d896\") " pod="openstack/nova-cell0-conductor-db-sync-6hnzn" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.319493 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-scripts\") pod \"nova-cell0-conductor-db-sync-6hnzn\" (UID: \"38a22873-d856-41db-84a7-909eabf0d896\") " pod="openstack/nova-cell0-conductor-db-sync-6hnzn" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.319529 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq5xw\" (UniqueName: \"kubernetes.io/projected/38a22873-d856-41db-84a7-909eabf0d896-kube-api-access-xq5xw\") pod \"nova-cell0-conductor-db-sync-6hnzn\" (UID: 
\"38a22873-d856-41db-84a7-909eabf0d896\") " pod="openstack/nova-cell0-conductor-db-sync-6hnzn" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.332472 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-scripts\") pod \"nova-cell0-conductor-db-sync-6hnzn\" (UID: \"38a22873-d856-41db-84a7-909eabf0d896\") " pod="openstack/nova-cell0-conductor-db-sync-6hnzn" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.332829 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6hnzn\" (UID: \"38a22873-d856-41db-84a7-909eabf0d896\") " pod="openstack/nova-cell0-conductor-db-sync-6hnzn" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.343182 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-config-data\") pod \"nova-cell0-conductor-db-sync-6hnzn\" (UID: \"38a22873-d856-41db-84a7-909eabf0d896\") " pod="openstack/nova-cell0-conductor-db-sync-6hnzn" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.347998 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq5xw\" (UniqueName: \"kubernetes.io/projected/38a22873-d856-41db-84a7-909eabf0d896-kube-api-access-xq5xw\") pod \"nova-cell0-conductor-db-sync-6hnzn\" (UID: \"38a22873-d856-41db-84a7-909eabf0d896\") " pod="openstack/nova-cell0-conductor-db-sync-6hnzn" Jan 29 09:03:13 crc kubenswrapper[4895]: I0129 09:03:13.464007 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6hnzn" Jan 29 09:03:14 crc kubenswrapper[4895]: I0129 09:03:14.043716 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6hnzn"] Jan 29 09:03:14 crc kubenswrapper[4895]: W0129 09:03:14.063332 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38a22873_d856_41db_84a7_909eabf0d896.slice/crio-2499e49e2b2f3c8e55286e09a262d0d92d2705a4116587bd0c7c35818f5090ab WatchSource:0}: Error finding container 2499e49e2b2f3c8e55286e09a262d0d92d2705a4116587bd0c7c35818f5090ab: Status 404 returned error can't find the container with id 2499e49e2b2f3c8e55286e09a262d0d92d2705a4116587bd0c7c35818f5090ab Jan 29 09:03:14 crc kubenswrapper[4895]: I0129 09:03:14.068128 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:03:14 crc kubenswrapper[4895]: I0129 09:03:14.827643 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6hnzn" event={"ID":"38a22873-d856-41db-84a7-909eabf0d896","Type":"ContainerStarted","Data":"2499e49e2b2f3c8e55286e09a262d0d92d2705a4116587bd0c7c35818f5090ab"} Jan 29 09:03:15 crc kubenswrapper[4895]: I0129 09:03:15.088382 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Jan 29 09:03:15 crc kubenswrapper[4895]: I0129 09:03:15.088461 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Jan 29 09:03:15 crc kubenswrapper[4895]: I0129 09:03:15.088479 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Jan 29 09:03:15 crc kubenswrapper[4895]: I0129 09:03:15.088492 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Jan 29 09:03:15 crc kubenswrapper[4895]: 
I0129 09:03:15.110372 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-inspector-0" podUID="9a9f2123-8dc5-46d6-81ae-802f6e92c3a8" containerName="ironic-inspector-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 09:03:15 crc kubenswrapper[4895]: I0129 09:03:15.111303 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-inspector-0" podUID="9a9f2123-8dc5-46d6-81ae-802f6e92c3a8" containerName="ironic-inspector" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 09:03:16 crc kubenswrapper[4895]: I0129 09:03:16.021087 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:03:16 crc kubenswrapper[4895]: I0129 09:03:16.021538 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:03:16 crc kubenswrapper[4895]: I0129 09:03:16.320010 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 09:03:16 crc kubenswrapper[4895]: I0129 09:03:16.320227 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 09:03:16 crc kubenswrapper[4895]: I0129 09:03:16.334581 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 09:03:17 crc kubenswrapper[4895]: I0129 09:03:17.589276 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" 
podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 09:03:19 crc kubenswrapper[4895]: I0129 09:03:19.972007 4895 generic.go:334] "Generic (PLEG): container finished" podID="9a9f2123-8dc5-46d6-81ae-802f6e92c3a8" containerID="0ceb723263ca615aa5c87f4aa8fcec721866971426741d648a55130576cd6cbc" exitCode=0 Jan 29 09:03:19 crc kubenswrapper[4895]: I0129 09:03:19.972295 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8","Type":"ContainerDied","Data":"0ceb723263ca615aa5c87f4aa8fcec721866971426741d648a55130576cd6cbc"} Jan 29 09:03:19 crc kubenswrapper[4895]: I0129 09:03:19.974067 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Jan 29 09:03:19 crc kubenswrapper[4895]: I0129 09:03:19.974142 4895 scope.go:117] "RemoveContainer" containerID="0ceb723263ca615aa5c87f4aa8fcec721866971426741d648a55130576cd6cbc" Jan 29 09:03:20 crc kubenswrapper[4895]: I0129 09:03:20.088775 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-inspector-0" Jan 29 09:03:20 crc kubenswrapper[4895]: I0129 09:03:20.993827 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8","Type":"ContainerStarted","Data":"48e714b4c11ef3595f4ddf4376fe44c331dcdd61c0a4292d2e03acefb054f5a1"} Jan 29 09:03:25 crc kubenswrapper[4895]: I0129 09:03:25.088493 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Jan 29 09:03:25 crc kubenswrapper[4895]: I0129 09:03:25.089499 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Jan 29 09:03:25 crc kubenswrapper[4895]: I0129 09:03:25.095065 4895 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/ironic-inspector-0" podUID="9a9f2123-8dc5-46d6-81ae-802f6e92c3a8" containerName="ironic-inspector-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 09:03:25 crc kubenswrapper[4895]: I0129 09:03:25.097815 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-inspector-0" podUID="9a9f2123-8dc5-46d6-81ae-802f6e92c3a8" containerName="ironic-inspector" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 09:03:29 crc kubenswrapper[4895]: I0129 09:03:29.096611 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6hnzn" event={"ID":"38a22873-d856-41db-84a7-909eabf0d896","Type":"ContainerStarted","Data":"bd723a998f06f1f7a27c7628406e6dfe66c3ac9c86a7a9bcc9f82084e02b056a"} Jan 29 09:03:29 crc kubenswrapper[4895]: I0129 09:03:29.108447 4895 generic.go:334] "Generic (PLEG): container finished" podID="9a9f2123-8dc5-46d6-81ae-802f6e92c3a8" containerID="48e714b4c11ef3595f4ddf4376fe44c331dcdd61c0a4292d2e03acefb054f5a1" exitCode=0 Jan 29 09:03:29 crc kubenswrapper[4895]: I0129 09:03:29.108541 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8","Type":"ContainerDied","Data":"48e714b4c11ef3595f4ddf4376fe44c331dcdd61c0a4292d2e03acefb054f5a1"} Jan 29 09:03:29 crc kubenswrapper[4895]: I0129 09:03:29.108616 4895 scope.go:117] "RemoveContainer" containerID="0ceb723263ca615aa5c87f4aa8fcec721866971426741d648a55130576cd6cbc" Jan 29 09:03:29 crc kubenswrapper[4895]: I0129 09:03:29.109840 4895 scope.go:117] "RemoveContainer" containerID="48e714b4c11ef3595f4ddf4376fe44c331dcdd61c0a4292d2e03acefb054f5a1" Jan 29 09:03:29 crc kubenswrapper[4895]: E0129 09:03:29.110167 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-inspector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-inspector 
pod=ironic-inspector-0_openstack(9a9f2123-8dc5-46d6-81ae-802f6e92c3a8)\"" pod="openstack/ironic-inspector-0" podUID="9a9f2123-8dc5-46d6-81ae-802f6e92c3a8" Jan 29 09:03:29 crc kubenswrapper[4895]: I0129 09:03:29.131114 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-6hnzn" podStartSLOduration=1.91306745 podStartE2EDuration="16.131062716s" podCreationTimestamp="2026-01-29 09:03:13 +0000 UTC" firstStartedPulling="2026-01-29 09:03:14.067777864 +0000 UTC m=+1335.709286010" lastFinishedPulling="2026-01-29 09:03:28.28577313 +0000 UTC m=+1349.927281276" observedRunningTime="2026-01-29 09:03:29.123239336 +0000 UTC m=+1350.764747482" watchObservedRunningTime="2026-01-29 09:03:29.131062716 +0000 UTC m=+1350.772570862" Jan 29 09:03:30 crc kubenswrapper[4895]: I0129 09:03:30.089227 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-inspector-0" Jan 29 09:03:30 crc kubenswrapper[4895]: I0129 09:03:30.127298 4895 scope.go:117] "RemoveContainer" containerID="48e714b4c11ef3595f4ddf4376fe44c331dcdd61c0a4292d2e03acefb054f5a1" Jan 29 09:03:30 crc kubenswrapper[4895]: E0129 09:03:30.127636 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-inspector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-inspector pod=ironic-inspector-0_openstack(9a9f2123-8dc5-46d6-81ae-802f6e92c3a8)\"" pod="openstack/ironic-inspector-0" podUID="9a9f2123-8dc5-46d6-81ae-802f6e92c3a8" Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.152108 4895 generic.go:334] "Generic (PLEG): container finished" podID="a3048dd8-2192-435c-a25f-8823906061ac" containerID="dd4a9e73d8b336493ad063e592ca33f69b8f5458d6749886a2434a6405b62b5c" exitCode=137 Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.152224 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a3048dd8-2192-435c-a25f-8823906061ac","Type":"ContainerDied","Data":"dd4a9e73d8b336493ad063e592ca33f69b8f5458d6749886a2434a6405b62b5c"} Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.246904 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.402108 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmcbn\" (UniqueName: \"kubernetes.io/projected/a3048dd8-2192-435c-a25f-8823906061ac-kube-api-access-mmcbn\") pod \"a3048dd8-2192-435c-a25f-8823906061ac\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.402275 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3048dd8-2192-435c-a25f-8823906061ac-run-httpd\") pod \"a3048dd8-2192-435c-a25f-8823906061ac\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.402339 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-combined-ca-bundle\") pod \"a3048dd8-2192-435c-a25f-8823906061ac\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.402449 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-scripts\") pod \"a3048dd8-2192-435c-a25f-8823906061ac\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.402616 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-config-data\") pod 
\"a3048dd8-2192-435c-a25f-8823906061ac\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.402726 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3048dd8-2192-435c-a25f-8823906061ac-log-httpd\") pod \"a3048dd8-2192-435c-a25f-8823906061ac\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.402832 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-sg-core-conf-yaml\") pod \"a3048dd8-2192-435c-a25f-8823906061ac\" (UID: \"a3048dd8-2192-435c-a25f-8823906061ac\") " Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.403336 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3048dd8-2192-435c-a25f-8823906061ac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a3048dd8-2192-435c-a25f-8823906061ac" (UID: "a3048dd8-2192-435c-a25f-8823906061ac"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.404326 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3048dd8-2192-435c-a25f-8823906061ac-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.404693 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3048dd8-2192-435c-a25f-8823906061ac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a3048dd8-2192-435c-a25f-8823906061ac" (UID: "a3048dd8-2192-435c-a25f-8823906061ac"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.411207 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-scripts" (OuterVolumeSpecName: "scripts") pod "a3048dd8-2192-435c-a25f-8823906061ac" (UID: "a3048dd8-2192-435c-a25f-8823906061ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.414203 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3048dd8-2192-435c-a25f-8823906061ac-kube-api-access-mmcbn" (OuterVolumeSpecName: "kube-api-access-mmcbn") pod "a3048dd8-2192-435c-a25f-8823906061ac" (UID: "a3048dd8-2192-435c-a25f-8823906061ac"). InnerVolumeSpecName "kube-api-access-mmcbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.441048 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a3048dd8-2192-435c-a25f-8823906061ac" (UID: "a3048dd8-2192-435c-a25f-8823906061ac"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.507096 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.507133 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmcbn\" (UniqueName: \"kubernetes.io/projected/a3048dd8-2192-435c-a25f-8823906061ac-kube-api-access-mmcbn\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.507148 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3048dd8-2192-435c-a25f-8823906061ac-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.507158 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.517231 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3048dd8-2192-435c-a25f-8823906061ac" (UID: "a3048dd8-2192-435c-a25f-8823906061ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.535058 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-config-data" (OuterVolumeSpecName: "config-data") pod "a3048dd8-2192-435c-a25f-8823906061ac" (UID: "a3048dd8-2192-435c-a25f-8823906061ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.609598 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:32 crc kubenswrapper[4895]: I0129 09:03:32.609644 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3048dd8-2192-435c-a25f-8823906061ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.169996 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3048dd8-2192-435c-a25f-8823906061ac","Type":"ContainerDied","Data":"674bcf1a04be26187c4860edb25d417c1d5e158abe430bb6e9db5fbb792f2034"} Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.170310 4895 scope.go:117] "RemoveContainer" containerID="dd4a9e73d8b336493ad063e592ca33f69b8f5458d6749886a2434a6405b62b5c" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.170083 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.216675 4895 scope.go:117] "RemoveContainer" containerID="241f0b80a30c71297e463e6c07f71a2b4b1e52ed8a4ad6ffb7ebc800dfdb4745" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.244296 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.244345 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.244533 4895 scope.go:117] "RemoveContainer" containerID="198155e5d6b3ab05adf4ec1b2deda980a6bc062a8baa4401f53a16fdffbfa3cc" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.268374 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:03:33 crc kubenswrapper[4895]: E0129 09:03:33.269193 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="sg-core" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.269222 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="sg-core" Jan 29 09:03:33 crc kubenswrapper[4895]: E0129 09:03:33.269262 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="ceilometer-notification-agent" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.269275 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="ceilometer-notification-agent" Jan 29 09:03:33 crc kubenswrapper[4895]: E0129 09:03:33.269303 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="proxy-httpd" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.269313 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="proxy-httpd" Jan 29 09:03:33 crc kubenswrapper[4895]: E0129 09:03:33.269330 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="ceilometer-central-agent" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.269340 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="ceilometer-central-agent" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.269644 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="ceilometer-notification-agent" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.269670 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="sg-core" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.269685 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="ceilometer-central-agent" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.269717 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3048dd8-2192-435c-a25f-8823906061ac" containerName="proxy-httpd" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.274002 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.277485 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.277784 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.281864 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.285680 4895 scope.go:117] "RemoveContainer" containerID="8b803d19f0b22a5d8bfed61897b8d03efc523962f712390e93f6205606aa9697" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.432862 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-config-data\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.432963 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.433006 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xf6p\" (UniqueName: \"kubernetes.io/projected/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-kube-api-access-6xf6p\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.433076 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-log-httpd\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.433129 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.433160 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-run-httpd\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.433223 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-scripts\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.535464 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-config-data\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.535829 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.535942 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xf6p\" (UniqueName: \"kubernetes.io/projected/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-kube-api-access-6xf6p\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.536121 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-log-httpd\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.536217 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.536302 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-run-httpd\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.536448 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-scripts\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.536654 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-log-httpd\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.536980 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-run-httpd\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.541955 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-scripts\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.542581 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.544822 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-config-data\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.553258 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.557578 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xf6p\" (UniqueName: \"kubernetes.io/projected/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-kube-api-access-6xf6p\") pod \"ceilometer-0\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " pod="openstack/ceilometer-0" Jan 29 09:03:33 crc kubenswrapper[4895]: I0129 09:03:33.602143 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:03:34 crc kubenswrapper[4895]: W0129 09:03:34.083876 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2f6e8ae_727a_4f6d_b4f4_5f3b88e08ccb.slice/crio-0652bcb18f5364691b7510bcbba6caf41a117dc13b570d42abc90c6e3e7fea4f WatchSource:0}: Error finding container 0652bcb18f5364691b7510bcbba6caf41a117dc13b570d42abc90c6e3e7fea4f: Status 404 returned error can't find the container with id 0652bcb18f5364691b7510bcbba6caf41a117dc13b570d42abc90c6e3e7fea4f Jan 29 09:03:34 crc kubenswrapper[4895]: I0129 09:03:34.093466 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:03:34 crc kubenswrapper[4895]: I0129 09:03:34.187638 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb","Type":"ContainerStarted","Data":"0652bcb18f5364691b7510bcbba6caf41a117dc13b570d42abc90c6e3e7fea4f"} Jan 29 09:03:35 crc kubenswrapper[4895]: I0129 09:03:35.089541 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Jan 29 09:03:35 crc kubenswrapper[4895]: I0129 09:03:35.091342 4895 scope.go:117] "RemoveContainer" containerID="48e714b4c11ef3595f4ddf4376fe44c331dcdd61c0a4292d2e03acefb054f5a1" Jan 29 09:03:35 crc kubenswrapper[4895]: E0129 09:03:35.091941 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-inspector\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-inspector pod=ironic-inspector-0_openstack(9a9f2123-8dc5-46d6-81ae-802f6e92c3a8)\"" pod="openstack/ironic-inspector-0" podUID="9a9f2123-8dc5-46d6-81ae-802f6e92c3a8" Jan 29 09:03:35 crc kubenswrapper[4895]: I0129 09:03:35.096514 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-inspector-0" podUID="9a9f2123-8dc5-46d6-81ae-802f6e92c3a8" containerName="ironic-inspector-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 09:03:35 crc kubenswrapper[4895]: I0129 09:03:35.200977 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb","Type":"ContainerStarted","Data":"c97e8d99918e397ff5c711f575a39c8defc7da4e046cdcb42eb09b29a4a86314"} Jan 29 09:03:35 crc kubenswrapper[4895]: I0129 09:03:35.240790 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3048dd8-2192-435c-a25f-8823906061ac" path="/var/lib/kubelet/pods/a3048dd8-2192-435c-a25f-8823906061ac/volumes" Jan 29 09:03:36 crc kubenswrapper[4895]: I0129 09:03:36.592999 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb","Type":"ContainerStarted","Data":"af5c67dbb6a4dbef34812e8e42b99e89b00fe95d060afd8a48521a9f4f847d0c"} Jan 29 09:03:41 crc kubenswrapper[4895]: I0129 09:03:41.654984 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb","Type":"ContainerStarted","Data":"25aab2c22828c48420eae84a7c42d18d967c8ce68862ef81065a6f9574686fb7"} Jan 29 09:03:45 crc kubenswrapper[4895]: I0129 09:03:45.095540 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-inspector-0" podUID="9a9f2123-8dc5-46d6-81ae-802f6e92c3a8" containerName="ironic-inspector-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 
29 09:03:46 crc kubenswrapper[4895]: I0129 09:03:46.021137 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:03:46 crc kubenswrapper[4895]: I0129 09:03:46.021212 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:03:46 crc kubenswrapper[4895]: I0129 09:03:46.212321 4895 scope.go:117] "RemoveContainer" containerID="48e714b4c11ef3595f4ddf4376fe44c331dcdd61c0a4292d2e03acefb054f5a1" Jan 29 09:03:47 crc kubenswrapper[4895]: I0129 09:03:47.726529 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb","Type":"ContainerStarted","Data":"e83fa7c8255e9bc2343f1c7b94438f1190eb43ce85e15fd50d511918cca148b4"} Jan 29 09:03:47 crc kubenswrapper[4895]: I0129 09:03:47.727303 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:03:47 crc kubenswrapper[4895]: I0129 09:03:47.731316 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"9a9f2123-8dc5-46d6-81ae-802f6e92c3a8","Type":"ContainerStarted","Data":"a53da50cce272bc798309430e8363dcc84a6f9643f2c4833ca0f6b7065a9c34e"} Jan 29 09:03:47 crc kubenswrapper[4895]: I0129 09:03:47.759438 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8635491069999999 podStartE2EDuration="14.759400881s" podCreationTimestamp="2026-01-29 09:03:33 +0000 UTC" 
firstStartedPulling="2026-01-29 09:03:34.088835346 +0000 UTC m=+1355.730343492" lastFinishedPulling="2026-01-29 09:03:46.98468712 +0000 UTC m=+1368.626195266" observedRunningTime="2026-01-29 09:03:47.757417418 +0000 UTC m=+1369.398925564" watchObservedRunningTime="2026-01-29 09:03:47.759400881 +0000 UTC m=+1369.400909047" Jan 29 09:03:50 crc kubenswrapper[4895]: I0129 09:03:50.088958 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Jan 29 09:03:53 crc kubenswrapper[4895]: I0129 09:03:53.888180 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:03:53 crc kubenswrapper[4895]: I0129 09:03:53.889372 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerName="ceilometer-central-agent" containerID="cri-o://c97e8d99918e397ff5c711f575a39c8defc7da4e046cdcb42eb09b29a4a86314" gracePeriod=30 Jan 29 09:03:53 crc kubenswrapper[4895]: I0129 09:03:53.889404 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerName="proxy-httpd" containerID="cri-o://e83fa7c8255e9bc2343f1c7b94438f1190eb43ce85e15fd50d511918cca148b4" gracePeriod=30 Jan 29 09:03:53 crc kubenswrapper[4895]: I0129 09:03:53.889536 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerName="ceilometer-notification-agent" containerID="cri-o://af5c67dbb6a4dbef34812e8e42b99e89b00fe95d060afd8a48521a9f4f847d0c" gracePeriod=30 Jan 29 09:03:53 crc kubenswrapper[4895]: I0129 09:03:53.889555 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerName="sg-core" 
containerID="cri-o://25aab2c22828c48420eae84a7c42d18d967c8ce68862ef81065a6f9574686fb7" gracePeriod=30 Jan 29 09:03:54 crc kubenswrapper[4895]: I0129 09:03:54.811661 4895 generic.go:334] "Generic (PLEG): container finished" podID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerID="e83fa7c8255e9bc2343f1c7b94438f1190eb43ce85e15fd50d511918cca148b4" exitCode=0 Jan 29 09:03:54 crc kubenswrapper[4895]: I0129 09:03:54.812047 4895 generic.go:334] "Generic (PLEG): container finished" podID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerID="25aab2c22828c48420eae84a7c42d18d967c8ce68862ef81065a6f9574686fb7" exitCode=2 Jan 29 09:03:54 crc kubenswrapper[4895]: I0129 09:03:54.812078 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb","Type":"ContainerDied","Data":"e83fa7c8255e9bc2343f1c7b94438f1190eb43ce85e15fd50d511918cca148b4"} Jan 29 09:03:54 crc kubenswrapper[4895]: I0129 09:03:54.812116 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb","Type":"ContainerDied","Data":"25aab2c22828c48420eae84a7c42d18d967c8ce68862ef81065a6f9574686fb7"} Jan 29 09:03:55 crc kubenswrapper[4895]: I0129 09:03:55.089066 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Jan 29 09:03:55 crc kubenswrapper[4895]: I0129 09:03:55.118163 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Jan 29 09:03:55 crc kubenswrapper[4895]: I0129 09:03:55.119737 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Jan 29 09:03:55 crc kubenswrapper[4895]: I0129 09:03:55.832874 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Jan 29 09:03:55 crc kubenswrapper[4895]: I0129 09:03:55.840382 4895 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Jan 29 09:03:58 crc kubenswrapper[4895]: I0129 09:03:58.877694 4895 generic.go:334] "Generic (PLEG): container finished" podID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerID="af5c67dbb6a4dbef34812e8e42b99e89b00fe95d060afd8a48521a9f4f847d0c" exitCode=0 Jan 29 09:03:58 crc kubenswrapper[4895]: I0129 09:03:58.877781 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb","Type":"ContainerDied","Data":"af5c67dbb6a4dbef34812e8e42b99e89b00fe95d060afd8a48521a9f4f847d0c"} Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.476406 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.645294 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-combined-ca-bundle\") pod \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.645941 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-log-httpd\") pod \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.646163 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-config-data\") pod \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.646243 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6xf6p\" (UniqueName: \"kubernetes.io/projected/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-kube-api-access-6xf6p\") pod \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.646396 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-sg-core-conf-yaml\") pod \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.646447 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-scripts\") pod \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.646486 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-run-httpd\") pod \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\" (UID: \"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb\") " Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.646766 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" (UID: "a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.648480 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.648996 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" (UID: "a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.654384 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-kube-api-access-6xf6p" (OuterVolumeSpecName: "kube-api-access-6xf6p") pod "a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" (UID: "a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb"). InnerVolumeSpecName "kube-api-access-6xf6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.676963 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-scripts" (OuterVolumeSpecName: "scripts") pod "a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" (UID: "a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.700605 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" (UID: "a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.751443 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xf6p\" (UniqueName: \"kubernetes.io/projected/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-kube-api-access-6xf6p\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.751482 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.751493 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.751505 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.763483 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" (UID: "a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.804409 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-config-data" (OuterVolumeSpecName: "config-data") pod "a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" (UID: "a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.855306 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.855354 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.936590 4895 generic.go:334] "Generic (PLEG): container finished" podID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerID="c97e8d99918e397ff5c711f575a39c8defc7da4e046cdcb42eb09b29a4a86314" exitCode=0 Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.936670 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb","Type":"ContainerDied","Data":"c97e8d99918e397ff5c711f575a39c8defc7da4e046cdcb42eb09b29a4a86314"} Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.936716 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb","Type":"ContainerDied","Data":"0652bcb18f5364691b7510bcbba6caf41a117dc13b570d42abc90c6e3e7fea4f"} Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.936714 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.936737 4895 scope.go:117] "RemoveContainer" containerID="e83fa7c8255e9bc2343f1c7b94438f1190eb43ce85e15fd50d511918cca148b4" Jan 29 09:04:00 crc kubenswrapper[4895]: I0129 09:04:00.968544 4895 scope.go:117] "RemoveContainer" containerID="25aab2c22828c48420eae84a7c42d18d967c8ce68862ef81065a6f9574686fb7" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.012035 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.034376 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.052225 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:04:01 crc kubenswrapper[4895]: E0129 09:04:01.052966 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerName="proxy-httpd" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.052994 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerName="proxy-httpd" Jan 29 09:04:01 crc kubenswrapper[4895]: E0129 09:04:01.053029 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerName="ceilometer-notification-agent" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.053039 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerName="ceilometer-notification-agent" Jan 29 09:04:01 crc kubenswrapper[4895]: E0129 09:04:01.053120 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerName="ceilometer-central-agent" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.053134 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerName="ceilometer-central-agent" Jan 29 09:04:01 crc kubenswrapper[4895]: E0129 09:04:01.053154 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerName="sg-core" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.053161 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerName="sg-core" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.053437 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerName="ceilometer-notification-agent" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.053462 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerName="proxy-httpd" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.053475 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerName="ceilometer-central-agent" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.053485 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" containerName="sg-core" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.061645 4895 scope.go:117] "RemoveContainer" containerID="af5c67dbb6a4dbef34812e8e42b99e89b00fe95d060afd8a48521a9f4f847d0c" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.077841 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.078016 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.084373 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.084861 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.148171 4895 scope.go:117] "RemoveContainer" containerID="c97e8d99918e397ff5c711f575a39c8defc7da4e046cdcb42eb09b29a4a86314" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.161372 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-scripts\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.161648 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.161764 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a43fe081-4498-49ae-8dbb-a6068aadfb06-run-httpd\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.161828 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnfhz\" (UniqueName: \"kubernetes.io/projected/a43fe081-4498-49ae-8dbb-a6068aadfb06-kube-api-access-cnfhz\") pod \"ceilometer-0\" (UID: 
\"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.162871 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-config-data\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.163252 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a43fe081-4498-49ae-8dbb-a6068aadfb06-log-httpd\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.163353 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.196379 4895 scope.go:117] "RemoveContainer" containerID="e83fa7c8255e9bc2343f1c7b94438f1190eb43ce85e15fd50d511918cca148b4" Jan 29 09:04:01 crc kubenswrapper[4895]: E0129 09:04:01.197475 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e83fa7c8255e9bc2343f1c7b94438f1190eb43ce85e15fd50d511918cca148b4\": container with ID starting with e83fa7c8255e9bc2343f1c7b94438f1190eb43ce85e15fd50d511918cca148b4 not found: ID does not exist" containerID="e83fa7c8255e9bc2343f1c7b94438f1190eb43ce85e15fd50d511918cca148b4" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.197566 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e83fa7c8255e9bc2343f1c7b94438f1190eb43ce85e15fd50d511918cca148b4"} err="failed to get container status \"e83fa7c8255e9bc2343f1c7b94438f1190eb43ce85e15fd50d511918cca148b4\": rpc error: code = NotFound desc = could not find container \"e83fa7c8255e9bc2343f1c7b94438f1190eb43ce85e15fd50d511918cca148b4\": container with ID starting with e83fa7c8255e9bc2343f1c7b94438f1190eb43ce85e15fd50d511918cca148b4 not found: ID does not exist" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.197605 4895 scope.go:117] "RemoveContainer" containerID="25aab2c22828c48420eae84a7c42d18d967c8ce68862ef81065a6f9574686fb7" Jan 29 09:04:01 crc kubenswrapper[4895]: E0129 09:04:01.199387 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25aab2c22828c48420eae84a7c42d18d967c8ce68862ef81065a6f9574686fb7\": container with ID starting with 25aab2c22828c48420eae84a7c42d18d967c8ce68862ef81065a6f9574686fb7 not found: ID does not exist" containerID="25aab2c22828c48420eae84a7c42d18d967c8ce68862ef81065a6f9574686fb7" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.199461 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25aab2c22828c48420eae84a7c42d18d967c8ce68862ef81065a6f9574686fb7"} err="failed to get container status \"25aab2c22828c48420eae84a7c42d18d967c8ce68862ef81065a6f9574686fb7\": rpc error: code = NotFound desc = could not find container \"25aab2c22828c48420eae84a7c42d18d967c8ce68862ef81065a6f9574686fb7\": container with ID starting with 25aab2c22828c48420eae84a7c42d18d967c8ce68862ef81065a6f9574686fb7 not found: ID does not exist" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.199513 4895 scope.go:117] "RemoveContainer" containerID="af5c67dbb6a4dbef34812e8e42b99e89b00fe95d060afd8a48521a9f4f847d0c" Jan 29 09:04:01 crc kubenswrapper[4895]: E0129 09:04:01.200241 4895 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"af5c67dbb6a4dbef34812e8e42b99e89b00fe95d060afd8a48521a9f4f847d0c\": container with ID starting with af5c67dbb6a4dbef34812e8e42b99e89b00fe95d060afd8a48521a9f4f847d0c not found: ID does not exist" containerID="af5c67dbb6a4dbef34812e8e42b99e89b00fe95d060afd8a48521a9f4f847d0c" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.200371 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af5c67dbb6a4dbef34812e8e42b99e89b00fe95d060afd8a48521a9f4f847d0c"} err="failed to get container status \"af5c67dbb6a4dbef34812e8e42b99e89b00fe95d060afd8a48521a9f4f847d0c\": rpc error: code = NotFound desc = could not find container \"af5c67dbb6a4dbef34812e8e42b99e89b00fe95d060afd8a48521a9f4f847d0c\": container with ID starting with af5c67dbb6a4dbef34812e8e42b99e89b00fe95d060afd8a48521a9f4f847d0c not found: ID does not exist" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.200440 4895 scope.go:117] "RemoveContainer" containerID="c97e8d99918e397ff5c711f575a39c8defc7da4e046cdcb42eb09b29a4a86314" Jan 29 09:04:01 crc kubenswrapper[4895]: E0129 09:04:01.201841 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c97e8d99918e397ff5c711f575a39c8defc7da4e046cdcb42eb09b29a4a86314\": container with ID starting with c97e8d99918e397ff5c711f575a39c8defc7da4e046cdcb42eb09b29a4a86314 not found: ID does not exist" containerID="c97e8d99918e397ff5c711f575a39c8defc7da4e046cdcb42eb09b29a4a86314" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.201904 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c97e8d99918e397ff5c711f575a39c8defc7da4e046cdcb42eb09b29a4a86314"} err="failed to get container status \"c97e8d99918e397ff5c711f575a39c8defc7da4e046cdcb42eb09b29a4a86314\": rpc error: code = NotFound desc = could not find container 
\"c97e8d99918e397ff5c711f575a39c8defc7da4e046cdcb42eb09b29a4a86314\": container with ID starting with c97e8d99918e397ff5c711f575a39c8defc7da4e046cdcb42eb09b29a4a86314 not found: ID does not exist" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.226207 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb" path="/var/lib/kubelet/pods/a2f6e8ae-727a-4f6d-b4f4-5f3b88e08ccb/volumes" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.265907 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-scripts\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.266005 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.266046 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a43fe081-4498-49ae-8dbb-a6068aadfb06-run-httpd\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.266069 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnfhz\" (UniqueName: \"kubernetes.io/projected/a43fe081-4498-49ae-8dbb-a6068aadfb06-kube-api-access-cnfhz\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.266109 4895 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-config-data\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.266188 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a43fe081-4498-49ae-8dbb-a6068aadfb06-log-httpd\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.266229 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.266632 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a43fe081-4498-49ae-8dbb-a6068aadfb06-run-httpd\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.266966 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a43fe081-4498-49ae-8dbb-a6068aadfb06-log-httpd\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.272276 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-scripts\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.272928 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.272962 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.274067 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-config-data\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.289352 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnfhz\" (UniqueName: \"kubernetes.io/projected/a43fe081-4498-49ae-8dbb-a6068aadfb06-kube-api-access-cnfhz\") pod \"ceilometer-0\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.428261 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.950304 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.958031 4895 generic.go:334] "Generic (PLEG): container finished" podID="38a22873-d856-41db-84a7-909eabf0d896" containerID="bd723a998f06f1f7a27c7628406e6dfe66c3ac9c86a7a9bcc9f82084e02b056a" exitCode=0 Jan 29 09:04:01 crc kubenswrapper[4895]: I0129 09:04:01.958101 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6hnzn" event={"ID":"38a22873-d856-41db-84a7-909eabf0d896","Type":"ContainerDied","Data":"bd723a998f06f1f7a27c7628406e6dfe66c3ac9c86a7a9bcc9f82084e02b056a"} Jan 29 09:04:02 crc kubenswrapper[4895]: I0129 09:04:02.996548 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a43fe081-4498-49ae-8dbb-a6068aadfb06","Type":"ContainerStarted","Data":"bdb3a1630521d36010852c4c374d64f7a087490af366a8b0975463550050de1b"} Jan 29 09:04:03 crc kubenswrapper[4895]: I0129 09:04:03.408898 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6hnzn" Jan 29 09:04:03 crc kubenswrapper[4895]: I0129 09:04:03.524956 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq5xw\" (UniqueName: \"kubernetes.io/projected/38a22873-d856-41db-84a7-909eabf0d896-kube-api-access-xq5xw\") pod \"38a22873-d856-41db-84a7-909eabf0d896\" (UID: \"38a22873-d856-41db-84a7-909eabf0d896\") " Jan 29 09:04:03 crc kubenswrapper[4895]: I0129 09:04:03.525058 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-combined-ca-bundle\") pod \"38a22873-d856-41db-84a7-909eabf0d896\" (UID: \"38a22873-d856-41db-84a7-909eabf0d896\") " Jan 29 09:04:03 crc kubenswrapper[4895]: I0129 09:04:03.525096 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-scripts\") pod \"38a22873-d856-41db-84a7-909eabf0d896\" (UID: \"38a22873-d856-41db-84a7-909eabf0d896\") " Jan 29 09:04:03 crc kubenswrapper[4895]: I0129 09:04:03.525198 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-config-data\") pod \"38a22873-d856-41db-84a7-909eabf0d896\" (UID: \"38a22873-d856-41db-84a7-909eabf0d896\") " Jan 29 09:04:03 crc kubenswrapper[4895]: I0129 09:04:03.532861 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a22873-d856-41db-84a7-909eabf0d896-kube-api-access-xq5xw" (OuterVolumeSpecName: "kube-api-access-xq5xw") pod "38a22873-d856-41db-84a7-909eabf0d896" (UID: "38a22873-d856-41db-84a7-909eabf0d896"). InnerVolumeSpecName "kube-api-access-xq5xw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:04:03 crc kubenswrapper[4895]: I0129 09:04:03.556210 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-scripts" (OuterVolumeSpecName: "scripts") pod "38a22873-d856-41db-84a7-909eabf0d896" (UID: "38a22873-d856-41db-84a7-909eabf0d896"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:03 crc kubenswrapper[4895]: I0129 09:04:03.592815 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-config-data" (OuterVolumeSpecName: "config-data") pod "38a22873-d856-41db-84a7-909eabf0d896" (UID: "38a22873-d856-41db-84a7-909eabf0d896"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:03 crc kubenswrapper[4895]: I0129 09:04:03.597759 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38a22873-d856-41db-84a7-909eabf0d896" (UID: "38a22873-d856-41db-84a7-909eabf0d896"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:03 crc kubenswrapper[4895]: I0129 09:04:03.628137 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq5xw\" (UniqueName: \"kubernetes.io/projected/38a22873-d856-41db-84a7-909eabf0d896-kube-api-access-xq5xw\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:03 crc kubenswrapper[4895]: I0129 09:04:03.628192 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:03 crc kubenswrapper[4895]: I0129 09:04:03.628208 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:03 crc kubenswrapper[4895]: I0129 09:04:03.628222 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a22873-d856-41db-84a7-909eabf0d896-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.021822 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a43fe081-4498-49ae-8dbb-a6068aadfb06","Type":"ContainerStarted","Data":"40d7d2a28048f4c5b2109fcf99283eeac6dacb8837828e255ea08022393a1069"} Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.030696 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6hnzn" event={"ID":"38a22873-d856-41db-84a7-909eabf0d896","Type":"ContainerDied","Data":"2499e49e2b2f3c8e55286e09a262d0d92d2705a4116587bd0c7c35818f5090ab"} Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.030759 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2499e49e2b2f3c8e55286e09a262d0d92d2705a4116587bd0c7c35818f5090ab" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 
09:04:04.030870 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6hnzn" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.113444 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 09:04:04 crc kubenswrapper[4895]: E0129 09:04:04.114391 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a22873-d856-41db-84a7-909eabf0d896" containerName="nova-cell0-conductor-db-sync" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.114415 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a22873-d856-41db-84a7-909eabf0d896" containerName="nova-cell0-conductor-db-sync" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.114675 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a22873-d856-41db-84a7-909eabf0d896" containerName="nova-cell0-conductor-db-sync" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.115572 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.118490 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-689gg" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.118768 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.135819 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.250668 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633d9018-c7c7-420f-9b03-6c983a5c40b4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"633d9018-c7c7-420f-9b03-6c983a5c40b4\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.251070 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/633d9018-c7c7-420f-9b03-6c983a5c40b4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"633d9018-c7c7-420f-9b03-6c983a5c40b4\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.251230 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgcxm\" (UniqueName: \"kubernetes.io/projected/633d9018-c7c7-420f-9b03-6c983a5c40b4-kube-api-access-vgcxm\") pod \"nova-cell0-conductor-0\" (UID: \"633d9018-c7c7-420f-9b03-6c983a5c40b4\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.353553 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/633d9018-c7c7-420f-9b03-6c983a5c40b4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"633d9018-c7c7-420f-9b03-6c983a5c40b4\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.354012 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgcxm\" (UniqueName: \"kubernetes.io/projected/633d9018-c7c7-420f-9b03-6c983a5c40b4-kube-api-access-vgcxm\") pod \"nova-cell0-conductor-0\" (UID: \"633d9018-c7c7-420f-9b03-6c983a5c40b4\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.354251 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633d9018-c7c7-420f-9b03-6c983a5c40b4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"633d9018-c7c7-420f-9b03-6c983a5c40b4\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.361108 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633d9018-c7c7-420f-9b03-6c983a5c40b4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"633d9018-c7c7-420f-9b03-6c983a5c40b4\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.373724 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgcxm\" (UniqueName: \"kubernetes.io/projected/633d9018-c7c7-420f-9b03-6c983a5c40b4-kube-api-access-vgcxm\") pod \"nova-cell0-conductor-0\" (UID: \"633d9018-c7c7-420f-9b03-6c983a5c40b4\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.375481 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/633d9018-c7c7-420f-9b03-6c983a5c40b4-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"633d9018-c7c7-420f-9b03-6c983a5c40b4\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.459743 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 09:04:04 crc kubenswrapper[4895]: I0129 09:04:04.964426 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 09:04:04 crc kubenswrapper[4895]: W0129 09:04:04.986434 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod633d9018_c7c7_420f_9b03_6c983a5c40b4.slice/crio-aae9f45ee0e444974ed66e1ec74fcdd74c725c9c3208bb7182951ae7fd479b5f WatchSource:0}: Error finding container aae9f45ee0e444974ed66e1ec74fcdd74c725c9c3208bb7182951ae7fd479b5f: Status 404 returned error can't find the container with id aae9f45ee0e444974ed66e1ec74fcdd74c725c9c3208bb7182951ae7fd479b5f Jan 29 09:04:05 crc kubenswrapper[4895]: I0129 09:04:05.047052 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a43fe081-4498-49ae-8dbb-a6068aadfb06","Type":"ContainerStarted","Data":"899a094a20448dc2bccb6eb2d248a52ab39e84bbc1ca2a0b16cb6a9cb1bba65f"} Jan 29 09:04:05 crc kubenswrapper[4895]: I0129 09:04:05.048571 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a43fe081-4498-49ae-8dbb-a6068aadfb06","Type":"ContainerStarted","Data":"8bc8e26b49a1d1a10666ea7e88a0eaaee0c4ab11f5e437464ccdac53743fa15e"} Jan 29 09:04:05 crc kubenswrapper[4895]: I0129 09:04:05.049022 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"633d9018-c7c7-420f-9b03-6c983a5c40b4","Type":"ContainerStarted","Data":"aae9f45ee0e444974ed66e1ec74fcdd74c725c9c3208bb7182951ae7fd479b5f"} Jan 29 09:04:06 crc kubenswrapper[4895]: I0129 09:04:06.063047 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-0" event={"ID":"633d9018-c7c7-420f-9b03-6c983a5c40b4","Type":"ContainerStarted","Data":"488c3070d8a190d35e3ea4b1166ed36cdd9bea70586152aaef9d3d3e9eeb3fa3"} Jan 29 09:04:06 crc kubenswrapper[4895]: I0129 09:04:06.063468 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 09:04:06 crc kubenswrapper[4895]: I0129 09:04:06.093318 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.093287734 podStartE2EDuration="2.093287734s" podCreationTimestamp="2026-01-29 09:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:04:06.081858819 +0000 UTC m=+1387.723366965" watchObservedRunningTime="2026-01-29 09:04:06.093287734 +0000 UTC m=+1387.734795880" Jan 29 09:04:08 crc kubenswrapper[4895]: I0129 09:04:08.090656 4895 generic.go:334] "Generic (PLEG): container finished" podID="f893b3e3-3833-4a94-ab55-951f600fdadd" containerID="f3880d5648bd6779f6ac04a3d3c5267cf390a0ee9a3b7842f23a494ddce96f89" exitCode=0 Jan 29 09:04:08 crc kubenswrapper[4895]: I0129 09:04:08.091345 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f893b3e3-3833-4a94-ab55-951f600fdadd","Type":"ContainerDied","Data":"f3880d5648bd6779f6ac04a3d3c5267cf390a0ee9a3b7842f23a494ddce96f89"} Jan 29 09:04:09 crc kubenswrapper[4895]: I0129 09:04:09.107350 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a43fe081-4498-49ae-8dbb-a6068aadfb06","Type":"ContainerStarted","Data":"3ea3cee743d6c39820726922743e6c1ca138f7432d76efaed68c43f9765d8b9e"} Jan 29 09:04:09 crc kubenswrapper[4895]: I0129 09:04:09.107738 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:04:09 crc kubenswrapper[4895]: I0129 
09:04:09.134370 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5290359540000003 podStartE2EDuration="9.134333726s" podCreationTimestamp="2026-01-29 09:04:00 +0000 UTC" firstStartedPulling="2026-01-29 09:04:01.963270811 +0000 UTC m=+1383.604778957" lastFinishedPulling="2026-01-29 09:04:08.568568583 +0000 UTC m=+1390.210076729" observedRunningTime="2026-01-29 09:04:09.131762967 +0000 UTC m=+1390.773271113" watchObservedRunningTime="2026-01-29 09:04:09.134333726 +0000 UTC m=+1390.775841872" Jan 29 09:04:10 crc kubenswrapper[4895]: I0129 09:04:10.127060 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f893b3e3-3833-4a94-ab55-951f600fdadd","Type":"ContainerStarted","Data":"b6fcea2c6c6fd7103c8378abc44ee6d95ea1c2ab761372512641e15d9b55af65"} Jan 29 09:04:10 crc kubenswrapper[4895]: I0129 09:04:10.127462 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f893b3e3-3833-4a94-ab55-951f600fdadd","Type":"ContainerStarted","Data":"6a4fa0659ebdaced312bd621c6df4394b93d8d703021dae5a8adcb2b2f554fbe"} Jan 29 09:04:11 crc kubenswrapper[4895]: I0129 09:04:11.147657 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f893b3e3-3833-4a94-ab55-951f600fdadd","Type":"ContainerStarted","Data":"d760613f9fccb739b8c60640da5fe625de60eeb90bcb14eebf1a77906c2996f6"} Jan 29 09:04:11 crc kubenswrapper[4895]: I0129 09:04:11.148043 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Jan 29 09:04:11 crc kubenswrapper[4895]: I0129 09:04:11.195623 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=76.974688359 podStartE2EDuration="2m9.195592212s" podCreationTimestamp="2026-01-29 09:02:02 +0000 UTC" firstStartedPulling="2026-01-29 09:02:08.126831762 
+0000 UTC m=+1269.768339908" lastFinishedPulling="2026-01-29 09:03:00.347735615 +0000 UTC m=+1321.989243761" observedRunningTime="2026-01-29 09:04:11.190497806 +0000 UTC m=+1392.832005952" watchObservedRunningTime="2026-01-29 09:04:11.195592212 +0000 UTC m=+1392.837100358" Jan 29 09:04:12 crc kubenswrapper[4895]: I0129 09:04:12.230441 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0" Jan 29 09:04:14 crc kubenswrapper[4895]: I0129 09:04:14.129526 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-conductor-0" podUID="f893b3e3-3833-4a94-ab55-951f600fdadd" containerName="ironic-conductor" probeResult="failure" output=< Jan 29 09:04:14 crc kubenswrapper[4895]: ironic-conductor-0 is offline Jan 29 09:04:14 crc kubenswrapper[4895]: > Jan 29 09:04:14 crc kubenswrapper[4895]: I0129 09:04:14.491838 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.105014 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-6khzt"] Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.107353 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6khzt" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.112367 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.112884 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.143147 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6khzt"] Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.224298 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6khzt\" (UID: \"f95b2d12-bd09-497c-84a2-b145f94a4818\") " pod="openstack/nova-cell0-cell-mapping-6khzt" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.224393 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt4cc\" (UniqueName: \"kubernetes.io/projected/f95b2d12-bd09-497c-84a2-b145f94a4818-kube-api-access-wt4cc\") pod \"nova-cell0-cell-mapping-6khzt\" (UID: \"f95b2d12-bd09-497c-84a2-b145f94a4818\") " pod="openstack/nova-cell0-cell-mapping-6khzt" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.224476 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-config-data\") pod \"nova-cell0-cell-mapping-6khzt\" (UID: \"f95b2d12-bd09-497c-84a2-b145f94a4818\") " pod="openstack/nova-cell0-cell-mapping-6khzt" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.224611 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-scripts\") pod \"nova-cell0-cell-mapping-6khzt\" (UID: \"f95b2d12-bd09-497c-84a2-b145f94a4818\") " pod="openstack/nova-cell0-cell-mapping-6khzt" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.378986 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6khzt\" (UID: \"f95b2d12-bd09-497c-84a2-b145f94a4818\") " pod="openstack/nova-cell0-cell-mapping-6khzt" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.379152 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt4cc\" (UniqueName: \"kubernetes.io/projected/f95b2d12-bd09-497c-84a2-b145f94a4818-kube-api-access-wt4cc\") pod \"nova-cell0-cell-mapping-6khzt\" (UID: \"f95b2d12-bd09-497c-84a2-b145f94a4818\") " pod="openstack/nova-cell0-cell-mapping-6khzt" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.379300 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-config-data\") pod \"nova-cell0-cell-mapping-6khzt\" (UID: \"f95b2d12-bd09-497c-84a2-b145f94a4818\") " pod="openstack/nova-cell0-cell-mapping-6khzt" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.379590 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-scripts\") pod \"nova-cell0-cell-mapping-6khzt\" (UID: \"f95b2d12-bd09-497c-84a2-b145f94a4818\") " pod="openstack/nova-cell0-cell-mapping-6khzt" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.396457 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-scripts\") pod \"nova-cell0-cell-mapping-6khzt\" (UID: \"f95b2d12-bd09-497c-84a2-b145f94a4818\") " pod="openstack/nova-cell0-cell-mapping-6khzt" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.453335 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6khzt\" (UID: \"f95b2d12-bd09-497c-84a2-b145f94a4818\") " pod="openstack/nova-cell0-cell-mapping-6khzt" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.469888 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-config-data\") pod \"nova-cell0-cell-mapping-6khzt\" (UID: \"f95b2d12-bd09-497c-84a2-b145f94a4818\") " pod="openstack/nova-cell0-cell-mapping-6khzt" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.475707 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt4cc\" (UniqueName: \"kubernetes.io/projected/f95b2d12-bd09-497c-84a2-b145f94a4818-kube-api-access-wt4cc\") pod \"nova-cell0-cell-mapping-6khzt\" (UID: \"f95b2d12-bd09-497c-84a2-b145f94a4818\") " pod="openstack/nova-cell0-cell-mapping-6khzt" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.551161 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.553983 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.565346 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.569803 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.585105 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.587602 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.596844 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.601600 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.667042 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.677270 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.686948 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.719367 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3405b2be-d52c-4e7e-846e-8ae737452bae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3405b2be-d52c-4e7e-846e-8ae737452bae\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.719504 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\") " pod="openstack/nova-api-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.719579 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgmbj\" (UniqueName: \"kubernetes.io/projected/3405b2be-d52c-4e7e-846e-8ae737452bae-kube-api-access-xgmbj\") pod \"nova-cell1-novncproxy-0\" (UID: \"3405b2be-d52c-4e7e-846e-8ae737452bae\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.719637 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-config-data\") pod \"nova-api-0\" (UID: \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\") " pod="openstack/nova-api-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.719661 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l49gc\" (UniqueName: 
\"kubernetes.io/projected/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-kube-api-access-l49gc\") pod \"nova-api-0\" (UID: \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\") " pod="openstack/nova-api-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.719700 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-logs\") pod \"nova-api-0\" (UID: \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\") " pod="openstack/nova-api-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.719721 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3405b2be-d52c-4e7e-846e-8ae737452bae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3405b2be-d52c-4e7e-846e-8ae737452bae\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.726130 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.732364 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.741716 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.744448 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6khzt" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.748939 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.822647 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af89c708-dad6-461e-a740-7c2f948e1f8a-config-data\") pod \"nova-scheduler-0\" (UID: \"af89c708-dad6-461e-a740-7c2f948e1f8a\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.822742 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3405b2be-d52c-4e7e-846e-8ae737452bae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3405b2be-d52c-4e7e-846e-8ae737452bae\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.822808 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af89c708-dad6-461e-a740-7c2f948e1f8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"af89c708-dad6-461e-a740-7c2f948e1f8a\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.822854 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\") " pod="openstack/nova-api-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.822896 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-426hx\" (UniqueName: 
\"kubernetes.io/projected/340c0e40-f5fb-4526-8433-65490a96c71c-kube-api-access-426hx\") pod \"nova-metadata-0\" (UID: \"340c0e40-f5fb-4526-8433-65490a96c71c\") " pod="openstack/nova-metadata-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.822962 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgmbj\" (UniqueName: \"kubernetes.io/projected/3405b2be-d52c-4e7e-846e-8ae737452bae-kube-api-access-xgmbj\") pod \"nova-cell1-novncproxy-0\" (UID: \"3405b2be-d52c-4e7e-846e-8ae737452bae\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.822991 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/340c0e40-f5fb-4526-8433-65490a96c71c-config-data\") pod \"nova-metadata-0\" (UID: \"340c0e40-f5fb-4526-8433-65490a96c71c\") " pod="openstack/nova-metadata-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.823047 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340c0e40-f5fb-4526-8433-65490a96c71c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"340c0e40-f5fb-4526-8433-65490a96c71c\") " pod="openstack/nova-metadata-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.823081 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-config-data\") pod \"nova-api-0\" (UID: \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\") " pod="openstack/nova-api-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.823115 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l49gc\" (UniqueName: \"kubernetes.io/projected/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-kube-api-access-l49gc\") pod \"nova-api-0\" (UID: 
\"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\") " pod="openstack/nova-api-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.823152 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm87v\" (UniqueName: \"kubernetes.io/projected/af89c708-dad6-461e-a740-7c2f948e1f8a-kube-api-access-tm87v\") pod \"nova-scheduler-0\" (UID: \"af89c708-dad6-461e-a740-7c2f948e1f8a\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.823181 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340c0e40-f5fb-4526-8433-65490a96c71c-logs\") pod \"nova-metadata-0\" (UID: \"340c0e40-f5fb-4526-8433-65490a96c71c\") " pod="openstack/nova-metadata-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.823207 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-logs\") pod \"nova-api-0\" (UID: \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\") " pod="openstack/nova-api-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.823226 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3405b2be-d52c-4e7e-846e-8ae737452bae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3405b2be-d52c-4e7e-846e-8ae737452bae\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.829643 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.830191 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-logs\") pod \"nova-api-0\" (UID: \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\") " 
pod="openstack/nova-api-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.840572 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3405b2be-d52c-4e7e-846e-8ae737452bae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3405b2be-d52c-4e7e-846e-8ae737452bae\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.843433 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3405b2be-d52c-4e7e-846e-8ae737452bae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3405b2be-d52c-4e7e-846e-8ae737452bae\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.846443 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-config-data\") pod \"nova-api-0\" (UID: \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\") " pod="openstack/nova-api-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.849408 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\") " pod="openstack/nova-api-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.865298 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgmbj\" (UniqueName: \"kubernetes.io/projected/3405b2be-d52c-4e7e-846e-8ae737452bae-kube-api-access-xgmbj\") pod \"nova-cell1-novncproxy-0\" (UID: \"3405b2be-d52c-4e7e-846e-8ae737452bae\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.871891 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l49gc\" 
(UniqueName: \"kubernetes.io/projected/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-kube-api-access-l49gc\") pod \"nova-api-0\" (UID: \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\") " pod="openstack/nova-api-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.897502 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-kfhbt"] Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.900287 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.927648 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-426hx\" (UniqueName: \"kubernetes.io/projected/340c0e40-f5fb-4526-8433-65490a96c71c-kube-api-access-426hx\") pod \"nova-metadata-0\" (UID: \"340c0e40-f5fb-4526-8433-65490a96c71c\") " pod="openstack/nova-metadata-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.927748 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/340c0e40-f5fb-4526-8433-65490a96c71c-config-data\") pod \"nova-metadata-0\" (UID: \"340c0e40-f5fb-4526-8433-65490a96c71c\") " pod="openstack/nova-metadata-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.927816 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340c0e40-f5fb-4526-8433-65490a96c71c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"340c0e40-f5fb-4526-8433-65490a96c71c\") " pod="openstack/nova-metadata-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.927874 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm87v\" (UniqueName: \"kubernetes.io/projected/af89c708-dad6-461e-a740-7c2f948e1f8a-kube-api-access-tm87v\") pod \"nova-scheduler-0\" (UID: \"af89c708-dad6-461e-a740-7c2f948e1f8a\") " 
pod="openstack/nova-scheduler-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.927902 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340c0e40-f5fb-4526-8433-65490a96c71c-logs\") pod \"nova-metadata-0\" (UID: \"340c0e40-f5fb-4526-8433-65490a96c71c\") " pod="openstack/nova-metadata-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.928005 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af89c708-dad6-461e-a740-7c2f948e1f8a-config-data\") pod \"nova-scheduler-0\" (UID: \"af89c708-dad6-461e-a740-7c2f948e1f8a\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.928063 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af89c708-dad6-461e-a740-7c2f948e1f8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"af89c708-dad6-461e-a740-7c2f948e1f8a\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.932451 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-kfhbt"] Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.933032 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340c0e40-f5fb-4526-8433-65490a96c71c-logs\") pod \"nova-metadata-0\" (UID: \"340c0e40-f5fb-4526-8433-65490a96c71c\") " pod="openstack/nova-metadata-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.940581 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af89c708-dad6-461e-a740-7c2f948e1f8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"af89c708-dad6-461e-a740-7c2f948e1f8a\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:15 crc 
kubenswrapper[4895]: I0129 09:04:15.945570 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.949447 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af89c708-dad6-461e-a740-7c2f948e1f8a-config-data\") pod \"nova-scheduler-0\" (UID: \"af89c708-dad6-461e-a740-7c2f948e1f8a\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.951718 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340c0e40-f5fb-4526-8433-65490a96c71c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"340c0e40-f5fb-4526-8433-65490a96c71c\") " pod="openstack/nova-metadata-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.964771 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/340c0e40-f5fb-4526-8433-65490a96c71c-config-data\") pod \"nova-metadata-0\" (UID: \"340c0e40-f5fb-4526-8433-65490a96c71c\") " pod="openstack/nova-metadata-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.968480 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm87v\" (UniqueName: \"kubernetes.io/projected/af89c708-dad6-461e-a740-7c2f948e1f8a-kube-api-access-tm87v\") pod \"nova-scheduler-0\" (UID: \"af89c708-dad6-461e-a740-7c2f948e1f8a\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 09:04:15.974720 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-426hx\" (UniqueName: \"kubernetes.io/projected/340c0e40-f5fb-4526-8433-65490a96c71c-kube-api-access-426hx\") pod \"nova-metadata-0\" (UID: \"340c0e40-f5fb-4526-8433-65490a96c71c\") " pod="openstack/nova-metadata-0" Jan 29 09:04:15 crc kubenswrapper[4895]: I0129 
09:04:15.994636 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.021635 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.021720 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.021788 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.023071 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2bf4fdb573e9460c60a2fe1e2302b28757eefe98ad1ae3c12a1c65609fd1bb38"} pod="openshift-machine-config-operator/machine-config-daemon-z82hk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.023148 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" containerID="cri-o://2bf4fdb573e9460c60a2fe1e2302b28757eefe98ad1ae3c12a1c65609fd1bb38" gracePeriod=600 Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.043797 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.044103 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-dns-svc\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.044228 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hstqz\" (UniqueName: \"kubernetes.io/projected/abb41055-685a-4b58-85e5-fa877703ac61-kube-api-access-hstqz\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.044302 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.044368 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-config\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 
09:04:16.044502 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.045799 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.100082 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:04:16 crc kubenswrapper[4895]: E0129 09:04:16.121017 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4a4bd95_f02a_4617_9aa4_febfa6bee92b.slice/crio-conmon-2bf4fdb573e9460c60a2fe1e2302b28757eefe98ad1ae3c12a1c65609fd1bb38.scope\": RecentStats: unable to find data in memory cache]" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.148802 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hstqz\" (UniqueName: \"kubernetes.io/projected/abb41055-685a-4b58-85e5-fa877703ac61-kube-api-access-hstqz\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.148978 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.149044 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-config\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.149297 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.149329 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.149409 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-dns-svc\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.150679 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-dns-svc\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.151489 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.151676 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.152324 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-config\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.153202 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.182545 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hstqz\" (UniqueName: \"kubernetes.io/projected/abb41055-685a-4b58-85e5-fa877703ac61-kube-api-access-hstqz\") pod \"dnsmasq-dns-bccf8f775-kfhbt\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.258737 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.312171 4895 generic.go:334] "Generic (PLEG): container finished" podID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerID="2bf4fdb573e9460c60a2fe1e2302b28757eefe98ad1ae3c12a1c65609fd1bb38" exitCode=0 Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.312237 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerDied","Data":"2bf4fdb573e9460c60a2fe1e2302b28757eefe98ad1ae3c12a1c65609fd1bb38"} Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.312285 4895 scope.go:117] "RemoveContainer" containerID="ac19f0c558f19013faadf18c0d93d61660767ea0e756e78bbf7d902981654a13" Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.592655 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6khzt"] Jan 29 09:04:16 crc kubenswrapper[4895]: I0129 09:04:16.905446 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.019561 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.028336 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0" Jan 29 09:04:17 crc kubenswrapper[4895]: W0129 09:04:17.034102 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4163e0dc_95a3_4e5b_b5b7_a104a9a0afbf.slice/crio-43865fc48267ecacb5749069c02963215333d9cc8cf5b5b81a520e60e4640bcb WatchSource:0}: Error finding container 43865fc48267ecacb5749069c02963215333d9cc8cf5b5b81a520e60e4640bcb: Status 404 returned error can't find the container with id 
43865fc48267ecacb5749069c02963215333d9cc8cf5b5b81a520e60e4640bcb Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.043901 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.362342 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.367270 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3405b2be-d52c-4e7e-846e-8ae737452bae","Type":"ContainerStarted","Data":"aff66b0b00d4c31c14972f0f51061b56c7a4f7e4688607f5c07ec94e76e1b800"} Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.373975 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf","Type":"ContainerStarted","Data":"43865fc48267ecacb5749069c02963215333d9cc8cf5b5b81a520e60e4640bcb"} Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.382874 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerStarted","Data":"cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6"} Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.421043 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6khzt" event={"ID":"f95b2d12-bd09-497c-84a2-b145f94a4818","Type":"ContainerStarted","Data":"c88a21c4b4e0ab0cc669142099f90dd90bc52f98af717769ff4991d45c619a28"} Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.421534 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6khzt" event={"ID":"f95b2d12-bd09-497c-84a2-b145f94a4818","Type":"ContainerStarted","Data":"5d347626b354c89189bd0ab09b328494aa81a111723a43d3b282c28703043fb3"} Jan 29 09:04:17 crc 
kubenswrapper[4895]: I0129 09:04:17.455584 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.482320 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-6khzt" podStartSLOduration=2.482289198 podStartE2EDuration="2.482289198s" podCreationTimestamp="2026-01-29 09:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:04:17.44862214 +0000 UTC m=+1399.090130286" watchObservedRunningTime="2026-01-29 09:04:17.482289198 +0000 UTC m=+1399.123797344" Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.563994 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-kfhbt"] Jan 29 09:04:17 crc kubenswrapper[4895]: W0129 09:04:17.593256 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabb41055_685a_4b58_85e5_fa877703ac61.slice/crio-7ed73a603896f3db1809a3d69f2f59e4d973d53d54f8fd47fda23d206eeef5c1 WatchSource:0}: Error finding container 7ed73a603896f3db1809a3d69f2f59e4d973d53d54f8fd47fda23d206eeef5c1: Status 404 returned error can't find the container with id 7ed73a603896f3db1809a3d69f2f59e4d973d53d54f8fd47fda23d206eeef5c1 Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.614521 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jx5qb"] Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.616511 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jx5qb" Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.620198 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.625575 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.637230 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jx5qb"] Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.720316 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jx5qb\" (UID: \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\") " pod="openstack/nova-cell1-conductor-db-sync-jx5qb" Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.722049 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-config-data\") pod \"nova-cell1-conductor-db-sync-jx5qb\" (UID: \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\") " pod="openstack/nova-cell1-conductor-db-sync-jx5qb" Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.722388 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-scripts\") pod \"nova-cell1-conductor-db-sync-jx5qb\" (UID: \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\") " pod="openstack/nova-cell1-conductor-db-sync-jx5qb" Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.726415 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-h8xrv\" (UniqueName: \"kubernetes.io/projected/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-kube-api-access-h8xrv\") pod \"nova-cell1-conductor-db-sync-jx5qb\" (UID: \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\") " pod="openstack/nova-cell1-conductor-db-sync-jx5qb" Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.828988 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-scripts\") pod \"nova-cell1-conductor-db-sync-jx5qb\" (UID: \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\") " pod="openstack/nova-cell1-conductor-db-sync-jx5qb" Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.829139 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8xrv\" (UniqueName: \"kubernetes.io/projected/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-kube-api-access-h8xrv\") pod \"nova-cell1-conductor-db-sync-jx5qb\" (UID: \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\") " pod="openstack/nova-cell1-conductor-db-sync-jx5qb" Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.829192 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jx5qb\" (UID: \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\") " pod="openstack/nova-cell1-conductor-db-sync-jx5qb" Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.829297 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-config-data\") pod \"nova-cell1-conductor-db-sync-jx5qb\" (UID: \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\") " pod="openstack/nova-cell1-conductor-db-sync-jx5qb" Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.844079 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jx5qb\" (UID: \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\") " pod="openstack/nova-cell1-conductor-db-sync-jx5qb" Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.844109 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-config-data\") pod \"nova-cell1-conductor-db-sync-jx5qb\" (UID: \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\") " pod="openstack/nova-cell1-conductor-db-sync-jx5qb" Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.844509 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-scripts\") pod \"nova-cell1-conductor-db-sync-jx5qb\" (UID: \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\") " pod="openstack/nova-cell1-conductor-db-sync-jx5qb" Jan 29 09:04:17 crc kubenswrapper[4895]: I0129 09:04:17.851835 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8xrv\" (UniqueName: \"kubernetes.io/projected/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-kube-api-access-h8xrv\") pod \"nova-cell1-conductor-db-sync-jx5qb\" (UID: \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\") " pod="openstack/nova-cell1-conductor-db-sync-jx5qb" Jan 29 09:04:18 crc kubenswrapper[4895]: I0129 09:04:18.079501 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jx5qb" Jan 29 09:04:18 crc kubenswrapper[4895]: I0129 09:04:18.441169 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"340c0e40-f5fb-4526-8433-65490a96c71c","Type":"ContainerStarted","Data":"f89211422112ae768afb97db81fbe5b812bfbe6878e8e59fd2c6f237d0e386c6"} Jan 29 09:04:18 crc kubenswrapper[4895]: I0129 09:04:18.451985 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"af89c708-dad6-461e-a740-7c2f948e1f8a","Type":"ContainerStarted","Data":"d81af6e1b3d458f7694552f1533c3c36a689bf781179e9a108329c01bc5fa02c"} Jan 29 09:04:18 crc kubenswrapper[4895]: I0129 09:04:18.463130 4895 generic.go:334] "Generic (PLEG): container finished" podID="abb41055-685a-4b58-85e5-fa877703ac61" containerID="b6a409fb3d9294727707dc73535dbf537020a477050622bf1695f01f36fef7cd" exitCode=0 Jan 29 09:04:18 crc kubenswrapper[4895]: I0129 09:04:18.465089 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" event={"ID":"abb41055-685a-4b58-85e5-fa877703ac61","Type":"ContainerDied","Data":"b6a409fb3d9294727707dc73535dbf537020a477050622bf1695f01f36fef7cd"} Jan 29 09:04:18 crc kubenswrapper[4895]: I0129 09:04:18.465120 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" event={"ID":"abb41055-685a-4b58-85e5-fa877703ac61","Type":"ContainerStarted","Data":"7ed73a603896f3db1809a3d69f2f59e4d973d53d54f8fd47fda23d206eeef5c1"} Jan 29 09:04:18 crc kubenswrapper[4895]: I0129 09:04:18.673860 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jx5qb"] Jan 29 09:04:19 crc kubenswrapper[4895]: I0129 09:04:19.505393 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jx5qb" 
event={"ID":"e58f4f0b-0a2b-4f02-a61c-903e35516ce6","Type":"ContainerStarted","Data":"d742b21984901b203fca8f7fdcf5c7b4760154a4f75c083e4764eb60e0da9cf2"} Jan 29 09:04:19 crc kubenswrapper[4895]: I0129 09:04:19.550078 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:04:19 crc kubenswrapper[4895]: I0129 09:04:19.582575 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:04:20 crc kubenswrapper[4895]: I0129 09:04:20.535762 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" event={"ID":"abb41055-685a-4b58-85e5-fa877703ac61","Type":"ContainerStarted","Data":"0253525c28771cdac812e23fecad9c126e34a839e6e88c655182982ca71f4d69"} Jan 29 09:04:20 crc kubenswrapper[4895]: I0129 09:04:20.536546 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:20 crc kubenswrapper[4895]: I0129 09:04:20.562592 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" podStartSLOduration=5.5625618580000005 podStartE2EDuration="5.562561858s" podCreationTimestamp="2026-01-29 09:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:04:20.559675611 +0000 UTC m=+1402.201183767" watchObservedRunningTime="2026-01-29 09:04:20.562561858 +0000 UTC m=+1402.204069994" Jan 29 09:04:22 crc kubenswrapper[4895]: I0129 09:04:22.568371 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jx5qb" event={"ID":"e58f4f0b-0a2b-4f02-a61c-903e35516ce6","Type":"ContainerStarted","Data":"fa05d8dda4fde7bc10b7f544d4c1819066a36289672d37d6b23c288161874ea2"} Jan 29 09:04:22 crc kubenswrapper[4895]: I0129 09:04:22.574288 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"340c0e40-f5fb-4526-8433-65490a96c71c","Type":"ContainerStarted","Data":"406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240"} Jan 29 09:04:22 crc kubenswrapper[4895]: I0129 09:04:22.574389 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"340c0e40-f5fb-4526-8433-65490a96c71c","Type":"ContainerStarted","Data":"1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436"} Jan 29 09:04:22 crc kubenswrapper[4895]: I0129 09:04:22.574594 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="340c0e40-f5fb-4526-8433-65490a96c71c" containerName="nova-metadata-log" containerID="cri-o://1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436" gracePeriod=30 Jan 29 09:04:22 crc kubenswrapper[4895]: I0129 09:04:22.574749 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="340c0e40-f5fb-4526-8433-65490a96c71c" containerName="nova-metadata-metadata" containerID="cri-o://406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240" gracePeriod=30 Jan 29 09:04:22 crc kubenswrapper[4895]: I0129 09:04:22.580980 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf","Type":"ContainerStarted","Data":"23abee1ddf4e4ef6becf729c32788cbd894404eb9f76acb2316d78b764c52570"} Jan 29 09:04:22 crc kubenswrapper[4895]: I0129 09:04:22.581038 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf","Type":"ContainerStarted","Data":"95bf2b5f94e757e7cbb11e38bd44aa8a3ea96bc91a70f56ac2ce4d1492b25a9d"} Jan 29 09:04:22 crc kubenswrapper[4895]: I0129 09:04:22.584357 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"af89c708-dad6-461e-a740-7c2f948e1f8a","Type":"ContainerStarted","Data":"aa7e63fea9a6ca2c2996b6ff99a2ab7084740ee334741a214e3c92a8f1b9b46c"} Jan 29 09:04:22 crc kubenswrapper[4895]: I0129 09:04:22.591882 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3405b2be-d52c-4e7e-846e-8ae737452bae","Type":"ContainerStarted","Data":"c683b0b155bae1b66244ed1ea58e75157822d2dbae1ca853b5f15574f4ba1d2e"} Jan 29 09:04:22 crc kubenswrapper[4895]: I0129 09:04:22.592213 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3405b2be-d52c-4e7e-846e-8ae737452bae" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c683b0b155bae1b66244ed1ea58e75157822d2dbae1ca853b5f15574f4ba1d2e" gracePeriod=30 Jan 29 09:04:22 crc kubenswrapper[4895]: I0129 09:04:22.605184 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-jx5qb" podStartSLOduration=5.605149396 podStartE2EDuration="5.605149396s" podCreationTimestamp="2026-01-29 09:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:04:22.591033009 +0000 UTC m=+1404.232541155" watchObservedRunningTime="2026-01-29 09:04:22.605149396 +0000 UTC m=+1404.246657542" Jan 29 09:04:22 crc kubenswrapper[4895]: I0129 09:04:22.620573 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.664390585 podStartE2EDuration="7.620539047s" podCreationTimestamp="2026-01-29 09:04:15 +0000 UTC" firstStartedPulling="2026-01-29 09:04:17.399148298 +0000 UTC m=+1399.040656444" lastFinishedPulling="2026-01-29 09:04:21.35529676 +0000 UTC m=+1402.996804906" observedRunningTime="2026-01-29 09:04:22.617415012 +0000 UTC m=+1404.258923158" watchObservedRunningTime="2026-01-29 09:04:22.620539047 
+0000 UTC m=+1404.262047203" Jan 29 09:04:22 crc kubenswrapper[4895]: I0129 09:04:22.649738 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.786575268 podStartE2EDuration="7.649706796s" podCreationTimestamp="2026-01-29 09:04:15 +0000 UTC" firstStartedPulling="2026-01-29 09:04:17.48645665 +0000 UTC m=+1399.127964786" lastFinishedPulling="2026-01-29 09:04:21.349588168 +0000 UTC m=+1402.991096314" observedRunningTime="2026-01-29 09:04:22.639270896 +0000 UTC m=+1404.280779042" watchObservedRunningTime="2026-01-29 09:04:22.649706796 +0000 UTC m=+1404.291214942" Jan 29 09:04:22 crc kubenswrapper[4895]: I0129 09:04:22.707441 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.403518122 podStartE2EDuration="7.707410996s" podCreationTimestamp="2026-01-29 09:04:15 +0000 UTC" firstStartedPulling="2026-01-29 09:04:17.050139502 +0000 UTC m=+1398.691647648" lastFinishedPulling="2026-01-29 09:04:21.354032376 +0000 UTC m=+1402.995540522" observedRunningTime="2026-01-29 09:04:22.680746144 +0000 UTC m=+1404.322254300" watchObservedRunningTime="2026-01-29 09:04:22.707410996 +0000 UTC m=+1404.348919142" Jan 29 09:04:22 crc kubenswrapper[4895]: I0129 09:04:22.744863 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.373162311 podStartE2EDuration="7.744820984s" podCreationTimestamp="2026-01-29 09:04:15 +0000 UTC" firstStartedPulling="2026-01-29 09:04:16.982406584 +0000 UTC m=+1398.623914730" lastFinishedPulling="2026-01-29 09:04:21.354065257 +0000 UTC m=+1402.995573403" observedRunningTime="2026-01-29 09:04:22.705594917 +0000 UTC m=+1404.347103063" watchObservedRunningTime="2026-01-29 09:04:22.744820984 +0000 UTC m=+1404.386329130" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.380648 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.493272 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340c0e40-f5fb-4526-8433-65490a96c71c-logs\") pod \"340c0e40-f5fb-4526-8433-65490a96c71c\" (UID: \"340c0e40-f5fb-4526-8433-65490a96c71c\") " Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.493348 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340c0e40-f5fb-4526-8433-65490a96c71c-combined-ca-bundle\") pod \"340c0e40-f5fb-4526-8433-65490a96c71c\" (UID: \"340c0e40-f5fb-4526-8433-65490a96c71c\") " Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.493381 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-426hx\" (UniqueName: \"kubernetes.io/projected/340c0e40-f5fb-4526-8433-65490a96c71c-kube-api-access-426hx\") pod \"340c0e40-f5fb-4526-8433-65490a96c71c\" (UID: \"340c0e40-f5fb-4526-8433-65490a96c71c\") " Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.493581 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/340c0e40-f5fb-4526-8433-65490a96c71c-config-data\") pod \"340c0e40-f5fb-4526-8433-65490a96c71c\" (UID: \"340c0e40-f5fb-4526-8433-65490a96c71c\") " Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.493719 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/340c0e40-f5fb-4526-8433-65490a96c71c-logs" (OuterVolumeSpecName: "logs") pod "340c0e40-f5fb-4526-8433-65490a96c71c" (UID: "340c0e40-f5fb-4526-8433-65490a96c71c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.494280 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340c0e40-f5fb-4526-8433-65490a96c71c-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.503473 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/340c0e40-f5fb-4526-8433-65490a96c71c-kube-api-access-426hx" (OuterVolumeSpecName: "kube-api-access-426hx") pod "340c0e40-f5fb-4526-8433-65490a96c71c" (UID: "340c0e40-f5fb-4526-8433-65490a96c71c"). InnerVolumeSpecName "kube-api-access-426hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.530265 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/340c0e40-f5fb-4526-8433-65490a96c71c-config-data" (OuterVolumeSpecName: "config-data") pod "340c0e40-f5fb-4526-8433-65490a96c71c" (UID: "340c0e40-f5fb-4526-8433-65490a96c71c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.533998 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/340c0e40-f5fb-4526-8433-65490a96c71c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "340c0e40-f5fb-4526-8433-65490a96c71c" (UID: "340c0e40-f5fb-4526-8433-65490a96c71c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.601384 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340c0e40-f5fb-4526-8433-65490a96c71c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.601436 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-426hx\" (UniqueName: \"kubernetes.io/projected/340c0e40-f5fb-4526-8433-65490a96c71c-kube-api-access-426hx\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.601452 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/340c0e40-f5fb-4526-8433-65490a96c71c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.606825 4895 generic.go:334] "Generic (PLEG): container finished" podID="340c0e40-f5fb-4526-8433-65490a96c71c" containerID="406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240" exitCode=0 Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.606858 4895 generic.go:334] "Generic (PLEG): container finished" podID="340c0e40-f5fb-4526-8433-65490a96c71c" containerID="1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436" exitCode=143 Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.607809 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.612195 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"340c0e40-f5fb-4526-8433-65490a96c71c","Type":"ContainerDied","Data":"406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240"} Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.612307 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"340c0e40-f5fb-4526-8433-65490a96c71c","Type":"ContainerDied","Data":"1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436"} Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.612320 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"340c0e40-f5fb-4526-8433-65490a96c71c","Type":"ContainerDied","Data":"f89211422112ae768afb97db81fbe5b812bfbe6878e8e59fd2c6f237d0e386c6"} Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.612348 4895 scope.go:117] "RemoveContainer" containerID="406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.658517 4895 scope.go:117] "RemoveContainer" containerID="1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.689567 4895 scope.go:117] "RemoveContainer" containerID="406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240" Jan 29 09:04:23 crc kubenswrapper[4895]: E0129 09:04:23.694013 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240\": container with ID starting with 406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240 not found: ID does not exist" containerID="406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240" Jan 29 09:04:23 crc kubenswrapper[4895]: 
I0129 09:04:23.694063 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240"} err="failed to get container status \"406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240\": rpc error: code = NotFound desc = could not find container \"406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240\": container with ID starting with 406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240 not found: ID does not exist" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.694093 4895 scope.go:117] "RemoveContainer" containerID="1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.694202 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:04:23 crc kubenswrapper[4895]: E0129 09:04:23.704052 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436\": container with ID starting with 1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436 not found: ID does not exist" containerID="1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.704113 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436"} err="failed to get container status \"1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436\": rpc error: code = NotFound desc = could not find container \"1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436\": container with ID starting with 1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436 not found: ID does not exist" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 
09:04:23.704152 4895 scope.go:117] "RemoveContainer" containerID="406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.706259 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240"} err="failed to get container status \"406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240\": rpc error: code = NotFound desc = could not find container \"406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240\": container with ID starting with 406c8cb450529fb76604ec70fc745e17b596801366cf961ee7ea8d11d7056240 not found: ID does not exist" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.706314 4895 scope.go:117] "RemoveContainer" containerID="1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.707118 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436"} err="failed to get container status \"1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436\": rpc error: code = NotFound desc = could not find container \"1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436\": container with ID starting with 1a6714bc79f02b7c9a0d9971afa6730993f2659265f706c8e873dc27026a3436 not found: ID does not exist" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.734139 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.760058 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:04:23 crc kubenswrapper[4895]: E0129 09:04:23.760661 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="340c0e40-f5fb-4526-8433-65490a96c71c" 
containerName="nova-metadata-metadata" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.760685 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="340c0e40-f5fb-4526-8433-65490a96c71c" containerName="nova-metadata-metadata" Jan 29 09:04:23 crc kubenswrapper[4895]: E0129 09:04:23.760698 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="340c0e40-f5fb-4526-8433-65490a96c71c" containerName="nova-metadata-log" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.760706 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="340c0e40-f5fb-4526-8433-65490a96c71c" containerName="nova-metadata-log" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.760988 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="340c0e40-f5fb-4526-8433-65490a96c71c" containerName="nova-metadata-log" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.761024 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="340c0e40-f5fb-4526-8433-65490a96c71c" containerName="nova-metadata-metadata" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.762627 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.769046 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.769163 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.782955 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.806435 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d49753c6-a374-4b5d-9404-71c86559e71b-logs\") pod \"nova-metadata-0\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.806507 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmbvp\" (UniqueName: \"kubernetes.io/projected/d49753c6-a374-4b5d-9404-71c86559e71b-kube-api-access-nmbvp\") pod \"nova-metadata-0\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.806539 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.806694 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.806728 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-config-data\") pod \"nova-metadata-0\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.908853 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d49753c6-a374-4b5d-9404-71c86559e71b-logs\") pod \"nova-metadata-0\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.909338 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmbvp\" (UniqueName: \"kubernetes.io/projected/d49753c6-a374-4b5d-9404-71c86559e71b-kube-api-access-nmbvp\") pod \"nova-metadata-0\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.909386 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.909451 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d49753c6-a374-4b5d-9404-71c86559e71b-logs\") pod \"nova-metadata-0\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.909656 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.909767 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-config-data\") pod \"nova-metadata-0\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.916963 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.923006 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.923855 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-config-data\") pod \"nova-metadata-0\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " pod="openstack/nova-metadata-0" Jan 29 09:04:23 crc kubenswrapper[4895]: I0129 09:04:23.933192 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmbvp\" (UniqueName: \"kubernetes.io/projected/d49753c6-a374-4b5d-9404-71c86559e71b-kube-api-access-nmbvp\") pod 
\"nova-metadata-0\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " pod="openstack/nova-metadata-0" Jan 29 09:04:24 crc kubenswrapper[4895]: I0129 09:04:24.093465 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:04:24 crc kubenswrapper[4895]: I0129 09:04:24.698088 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:04:25 crc kubenswrapper[4895]: I0129 09:04:25.227162 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="340c0e40-f5fb-4526-8433-65490a96c71c" path="/var/lib/kubelet/pods/340c0e40-f5fb-4526-8433-65490a96c71c/volumes" Jan 29 09:04:25 crc kubenswrapper[4895]: I0129 09:04:25.632719 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d49753c6-a374-4b5d-9404-71c86559e71b","Type":"ContainerStarted","Data":"5341d76479aba4ced0010f5b26810e3a782acdbbdb0b8c183dc0c629377544c3"} Jan 29 09:04:25 crc kubenswrapper[4895]: I0129 09:04:25.633176 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d49753c6-a374-4b5d-9404-71c86559e71b","Type":"ContainerStarted","Data":"9ba6384af13ac563d9fd223e1c81e001e11c940d0e85265a945e5fcdad3c613e"} Jan 29 09:04:25 crc kubenswrapper[4895]: I0129 09:04:25.946994 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 09:04:25 crc kubenswrapper[4895]: I0129 09:04:25.947982 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 09:04:25 crc kubenswrapper[4895]: I0129 09:04:25.995674 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:26 crc kubenswrapper[4895]: I0129 09:04:26.101707 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 09:04:26 crc kubenswrapper[4895]: I0129 
09:04:26.101779 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 09:04:26 crc kubenswrapper[4895]: I0129 09:04:26.134871 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 09:04:26 crc kubenswrapper[4895]: I0129 09:04:26.261404 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:04:26 crc kubenswrapper[4895]: I0129 09:04:26.344302 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-d7mhv"] Jan 29 09:04:26 crc kubenswrapper[4895]: I0129 09:04:26.345927 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" podUID="ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57" containerName="dnsmasq-dns" containerID="cri-o://1054f5738a4e556614c0471c9472a322dc5ecd1d3b74e7ea0fe5f879934a9adf" gracePeriod=10 Jan 29 09:04:26 crc kubenswrapper[4895]: E0129 09:04:26.516455 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac55a7b1_fcd1_4d76_964b_7b0f1c2b7e57.slice/crio-1054f5738a4e556614c0471c9472a322dc5ecd1d3b74e7ea0fe5f879934a9adf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac55a7b1_fcd1_4d76_964b_7b0f1c2b7e57.slice/crio-conmon-1054f5738a4e556614c0471c9472a322dc5ecd1d3b74e7ea0fe5f879934a9adf.scope\": RecentStats: unable to find data in memory cache]" Jan 29 09:04:26 crc kubenswrapper[4895]: I0129 09:04:26.650816 4895 generic.go:334] "Generic (PLEG): container finished" podID="ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57" containerID="1054f5738a4e556614c0471c9472a322dc5ecd1d3b74e7ea0fe5f879934a9adf" exitCode=0 Jan 29 09:04:26 crc kubenswrapper[4895]: I0129 09:04:26.650955 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" event={"ID":"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57","Type":"ContainerDied","Data":"1054f5738a4e556614c0471c9472a322dc5ecd1d3b74e7ea0fe5f879934a9adf"} Jan 29 09:04:26 crc kubenswrapper[4895]: I0129 09:04:26.654696 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d49753c6-a374-4b5d-9404-71c86559e71b","Type":"ContainerStarted","Data":"e0c43323de8fe4fa4495db4d692a52ee00d1dc8097a6caaacb6ef5f71b274326"} Jan 29 09:04:26 crc kubenswrapper[4895]: I0129 09:04:26.690266 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.690227818 podStartE2EDuration="3.690227818s" podCreationTimestamp="2026-01-29 09:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:04:26.690176487 +0000 UTC m=+1408.331684623" watchObservedRunningTime="2026-01-29 09:04:26.690227818 +0000 UTC m=+1408.331735964" Jan 29 09:04:26 crc kubenswrapper[4895]: I0129 09:04:26.707856 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.035422 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.035573 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 
29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.089059 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.240783 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-config\") pod \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.241035 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-dns-svc\") pod \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.241202 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-ovsdbserver-nb\") pod \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.241402 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-dns-swift-storage-0\") pod \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.241479 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-ovsdbserver-sb\") pod \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.241562 
4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8stz\" (UniqueName: \"kubernetes.io/projected/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-kube-api-access-b8stz\") pod \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\" (UID: \"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57\") " Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.253257 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-kube-api-access-b8stz" (OuterVolumeSpecName: "kube-api-access-b8stz") pod "ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57" (UID: "ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57"). InnerVolumeSpecName "kube-api-access-b8stz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.316810 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57" (UID: "ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.344743 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8stz\" (UniqueName: \"kubernetes.io/projected/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-kube-api-access-b8stz\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.344783 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.362076 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-config" (OuterVolumeSpecName: "config") pod "ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57" (UID: "ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.370573 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57" (UID: "ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.371213 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57" (UID: "ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.374418 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57" (UID: "ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.447303 4895 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.447764 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.447777 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.447786 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.670138 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" event={"ID":"ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57","Type":"ContainerDied","Data":"43d33b0340083bcac9a375ff1343dd4542366d54c75e9440bb50db08f55967f3"} Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.670227 4895 scope.go:117] "RemoveContainer" 
containerID="1054f5738a4e556614c0471c9472a322dc5ecd1d3b74e7ea0fe5f879934a9adf" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.670487 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-d7mhv" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.711821 4895 scope.go:117] "RemoveContainer" containerID="6f6d94f9b72f1f6de06eca4cc7f25490088c9eb005917235e4c5387465c67cfb" Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.727220 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-d7mhv"] Jan 29 09:04:27 crc kubenswrapper[4895]: I0129 09:04:27.744040 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-d7mhv"] Jan 29 09:04:29 crc kubenswrapper[4895]: I0129 09:04:29.094341 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:04:29 crc kubenswrapper[4895]: I0129 09:04:29.095060 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:04:29 crc kubenswrapper[4895]: I0129 09:04:29.253595 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57" path="/var/lib/kubelet/pods/ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57/volumes" Jan 29 09:04:29 crc kubenswrapper[4895]: I0129 09:04:29.697366 4895 generic.go:334] "Generic (PLEG): container finished" podID="f95b2d12-bd09-497c-84a2-b145f94a4818" containerID="c88a21c4b4e0ab0cc669142099f90dd90bc52f98af717769ff4991d45c619a28" exitCode=0 Jan 29 09:04:29 crc kubenswrapper[4895]: I0129 09:04:29.697907 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6khzt" event={"ID":"f95b2d12-bd09-497c-84a2-b145f94a4818","Type":"ContainerDied","Data":"c88a21c4b4e0ab0cc669142099f90dd90bc52f98af717769ff4991d45c619a28"} Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.132889 4895 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6khzt" Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.250260 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt4cc\" (UniqueName: \"kubernetes.io/projected/f95b2d12-bd09-497c-84a2-b145f94a4818-kube-api-access-wt4cc\") pod \"f95b2d12-bd09-497c-84a2-b145f94a4818\" (UID: \"f95b2d12-bd09-497c-84a2-b145f94a4818\") " Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.250439 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-config-data\") pod \"f95b2d12-bd09-497c-84a2-b145f94a4818\" (UID: \"f95b2d12-bd09-497c-84a2-b145f94a4818\") " Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.250558 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-scripts\") pod \"f95b2d12-bd09-497c-84a2-b145f94a4818\" (UID: \"f95b2d12-bd09-497c-84a2-b145f94a4818\") " Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.250716 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-combined-ca-bundle\") pod \"f95b2d12-bd09-497c-84a2-b145f94a4818\" (UID: \"f95b2d12-bd09-497c-84a2-b145f94a4818\") " Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.259419 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f95b2d12-bd09-497c-84a2-b145f94a4818-kube-api-access-wt4cc" (OuterVolumeSpecName: "kube-api-access-wt4cc") pod "f95b2d12-bd09-497c-84a2-b145f94a4818" (UID: "f95b2d12-bd09-497c-84a2-b145f94a4818"). InnerVolumeSpecName "kube-api-access-wt4cc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.262432 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-scripts" (OuterVolumeSpecName: "scripts") pod "f95b2d12-bd09-497c-84a2-b145f94a4818" (UID: "f95b2d12-bd09-497c-84a2-b145f94a4818"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.284995 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-config-data" (OuterVolumeSpecName: "config-data") pod "f95b2d12-bd09-497c-84a2-b145f94a4818" (UID: "f95b2d12-bd09-497c-84a2-b145f94a4818"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.294937 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f95b2d12-bd09-497c-84a2-b145f94a4818" (UID: "f95b2d12-bd09-497c-84a2-b145f94a4818"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.354150 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt4cc\" (UniqueName: \"kubernetes.io/projected/f95b2d12-bd09-497c-84a2-b145f94a4818-kube-api-access-wt4cc\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.354196 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.354206 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.354214 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95b2d12-bd09-497c-84a2-b145f94a4818-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.441784 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.720116 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6khzt" event={"ID":"f95b2d12-bd09-497c-84a2-b145f94a4818","Type":"ContainerDied","Data":"5d347626b354c89189bd0ab09b328494aa81a111723a43d3b282c28703043fb3"} Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.720169 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d347626b354c89189bd0ab09b328494aa81a111723a43d3b282c28703043fb3" Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.720184 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6khzt" Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.925888 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.927866 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" containerName="nova-api-log" containerID="cri-o://95bf2b5f94e757e7cbb11e38bd44aa8a3ea96bc91a70f56ac2ce4d1492b25a9d" gracePeriod=30 Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.928085 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" containerName="nova-api-api" containerID="cri-o://23abee1ddf4e4ef6becf729c32788cbd894404eb9f76acb2316d78b764c52570" gracePeriod=30 Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.940717 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:04:31 crc kubenswrapper[4895]: I0129 09:04:31.941291 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="af89c708-dad6-461e-a740-7c2f948e1f8a" containerName="nova-scheduler-scheduler" containerID="cri-o://aa7e63fea9a6ca2c2996b6ff99a2ab7084740ee334741a214e3c92a8f1b9b46c" gracePeriod=30 Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.098056 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.098409 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d49753c6-a374-4b5d-9404-71c86559e71b" containerName="nova-metadata-log" containerID="cri-o://5341d76479aba4ced0010f5b26810e3a782acdbbdb0b8c183dc0c629377544c3" gracePeriod=30 Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.098523 4895 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d49753c6-a374-4b5d-9404-71c86559e71b" containerName="nova-metadata-metadata" containerID="cri-o://e0c43323de8fe4fa4495db4d692a52ee00d1dc8097a6caaacb6ef5f71b274326" gracePeriod=30 Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.736832 4895 generic.go:334] "Generic (PLEG): container finished" podID="4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" containerID="95bf2b5f94e757e7cbb11e38bd44aa8a3ea96bc91a70f56ac2ce4d1492b25a9d" exitCode=143 Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.736930 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf","Type":"ContainerDied","Data":"95bf2b5f94e757e7cbb11e38bd44aa8a3ea96bc91a70f56ac2ce4d1492b25a9d"} Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.741801 4895 generic.go:334] "Generic (PLEG): container finished" podID="e58f4f0b-0a2b-4f02-a61c-903e35516ce6" containerID="fa05d8dda4fde7bc10b7f544d4c1819066a36289672d37d6b23c288161874ea2" exitCode=0 Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.741962 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jx5qb" event={"ID":"e58f4f0b-0a2b-4f02-a61c-903e35516ce6","Type":"ContainerDied","Data":"fa05d8dda4fde7bc10b7f544d4c1819066a36289672d37d6b23c288161874ea2"} Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.744235 4895 generic.go:334] "Generic (PLEG): container finished" podID="d49753c6-a374-4b5d-9404-71c86559e71b" containerID="e0c43323de8fe4fa4495db4d692a52ee00d1dc8097a6caaacb6ef5f71b274326" exitCode=0 Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.744263 4895 generic.go:334] "Generic (PLEG): container finished" podID="d49753c6-a374-4b5d-9404-71c86559e71b" containerID="5341d76479aba4ced0010f5b26810e3a782acdbbdb0b8c183dc0c629377544c3" exitCode=143 Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.744311 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d49753c6-a374-4b5d-9404-71c86559e71b","Type":"ContainerDied","Data":"e0c43323de8fe4fa4495db4d692a52ee00d1dc8097a6caaacb6ef5f71b274326"} Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.744367 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d49753c6-a374-4b5d-9404-71c86559e71b","Type":"ContainerDied","Data":"5341d76479aba4ced0010f5b26810e3a782acdbbdb0b8c183dc0c629377544c3"} Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.744381 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d49753c6-a374-4b5d-9404-71c86559e71b","Type":"ContainerDied","Data":"9ba6384af13ac563d9fd223e1c81e001e11c940d0e85265a945e5fcdad3c613e"} Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.744396 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ba6384af13ac563d9fd223e1c81e001e11c940d0e85265a945e5fcdad3c613e" Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.776109 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.891168 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-combined-ca-bundle\") pod \"d49753c6-a374-4b5d-9404-71c86559e71b\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.891263 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-nova-metadata-tls-certs\") pod \"d49753c6-a374-4b5d-9404-71c86559e71b\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.891435 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d49753c6-a374-4b5d-9404-71c86559e71b-logs\") pod \"d49753c6-a374-4b5d-9404-71c86559e71b\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.891465 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-config-data\") pod \"d49753c6-a374-4b5d-9404-71c86559e71b\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.891702 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmbvp\" (UniqueName: \"kubernetes.io/projected/d49753c6-a374-4b5d-9404-71c86559e71b-kube-api-access-nmbvp\") pod \"d49753c6-a374-4b5d-9404-71c86559e71b\" (UID: \"d49753c6-a374-4b5d-9404-71c86559e71b\") " Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.892024 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d49753c6-a374-4b5d-9404-71c86559e71b-logs" (OuterVolumeSpecName: "logs") pod "d49753c6-a374-4b5d-9404-71c86559e71b" (UID: "d49753c6-a374-4b5d-9404-71c86559e71b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.892381 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d49753c6-a374-4b5d-9404-71c86559e71b-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.899612 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49753c6-a374-4b5d-9404-71c86559e71b-kube-api-access-nmbvp" (OuterVolumeSpecName: "kube-api-access-nmbvp") pod "d49753c6-a374-4b5d-9404-71c86559e71b" (UID: "d49753c6-a374-4b5d-9404-71c86559e71b"). InnerVolumeSpecName "kube-api-access-nmbvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.922790 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-config-data" (OuterVolumeSpecName: "config-data") pod "d49753c6-a374-4b5d-9404-71c86559e71b" (UID: "d49753c6-a374-4b5d-9404-71c86559e71b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.924689 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d49753c6-a374-4b5d-9404-71c86559e71b" (UID: "d49753c6-a374-4b5d-9404-71c86559e71b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.955681 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d49753c6-a374-4b5d-9404-71c86559e71b" (UID: "d49753c6-a374-4b5d-9404-71c86559e71b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.999751 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.999801 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmbvp\" (UniqueName: \"kubernetes.io/projected/d49753c6-a374-4b5d-9404-71c86559e71b-kube-api-access-nmbvp\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.999818 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:32 crc kubenswrapper[4895]: I0129 09:04:32.999827 4895 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49753c6-a374-4b5d-9404-71c86559e71b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.755837 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.796488 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.810692 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.824427 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:04:33 crc kubenswrapper[4895]: E0129 09:04:33.825186 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49753c6-a374-4b5d-9404-71c86559e71b" containerName="nova-metadata-metadata" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.825209 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49753c6-a374-4b5d-9404-71c86559e71b" containerName="nova-metadata-metadata" Jan 29 09:04:33 crc kubenswrapper[4895]: E0129 09:04:33.825225 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57" containerName="init" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.825232 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57" containerName="init" Jan 29 09:04:33 crc kubenswrapper[4895]: E0129 09:04:33.825250 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57" containerName="dnsmasq-dns" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.825257 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57" containerName="dnsmasq-dns" Jan 29 09:04:33 crc kubenswrapper[4895]: E0129 09:04:33.825277 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49753c6-a374-4b5d-9404-71c86559e71b" containerName="nova-metadata-log" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.825283 4895 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d49753c6-a374-4b5d-9404-71c86559e71b" containerName="nova-metadata-log" Jan 29 09:04:33 crc kubenswrapper[4895]: E0129 09:04:33.825302 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f95b2d12-bd09-497c-84a2-b145f94a4818" containerName="nova-manage" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.825307 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f95b2d12-bd09-497c-84a2-b145f94a4818" containerName="nova-manage" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.825503 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f95b2d12-bd09-497c-84a2-b145f94a4818" containerName="nova-manage" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.825520 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49753c6-a374-4b5d-9404-71c86559e71b" containerName="nova-metadata-log" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.825537 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac55a7b1-fcd1-4d76-964b-7b0f1c2b7e57" containerName="dnsmasq-dns" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.825553 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49753c6-a374-4b5d-9404-71c86559e71b" containerName="nova-metadata-metadata" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.826838 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.830056 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.830366 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.835573 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.926841 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " pod="openstack/nova-metadata-0" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.926938 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-logs\") pod \"nova-metadata-0\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " pod="openstack/nova-metadata-0" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.926968 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " pod="openstack/nova-metadata-0" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.927033 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-config-data\") pod \"nova-metadata-0\" (UID: 
\"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " pod="openstack/nova-metadata-0" Jan 29 09:04:33 crc kubenswrapper[4895]: I0129 09:04:33.927063 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7wvh\" (UniqueName: \"kubernetes.io/projected/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-kube-api-access-h7wvh\") pod \"nova-metadata-0\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " pod="openstack/nova-metadata-0" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.029384 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-logs\") pod \"nova-metadata-0\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " pod="openstack/nova-metadata-0" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.029460 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " pod="openstack/nova-metadata-0" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.029590 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-config-data\") pod \"nova-metadata-0\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " pod="openstack/nova-metadata-0" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.029642 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7wvh\" (UniqueName: \"kubernetes.io/projected/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-kube-api-access-h7wvh\") pod \"nova-metadata-0\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " pod="openstack/nova-metadata-0" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.029776 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " pod="openstack/nova-metadata-0" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.032651 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-logs\") pod \"nova-metadata-0\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " pod="openstack/nova-metadata-0" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.038662 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " pod="openstack/nova-metadata-0" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.038830 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " pod="openstack/nova-metadata-0" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.039774 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-config-data\") pod \"nova-metadata-0\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " pod="openstack/nova-metadata-0" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.053464 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7wvh\" (UniqueName: \"kubernetes.io/projected/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-kube-api-access-h7wvh\") pod 
\"nova-metadata-0\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " pod="openstack/nova-metadata-0" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.231199 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.329503 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jx5qb" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.449756 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-config-data\") pod \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\" (UID: \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\") " Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.449892 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8xrv\" (UniqueName: \"kubernetes.io/projected/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-kube-api-access-h8xrv\") pod \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\" (UID: \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\") " Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.449985 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-combined-ca-bundle\") pod \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\" (UID: \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\") " Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.450040 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-scripts\") pod \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\" (UID: \"e58f4f0b-0a2b-4f02-a61c-903e35516ce6\") " Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.458004 4895 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-scripts" (OuterVolumeSpecName: "scripts") pod "e58f4f0b-0a2b-4f02-a61c-903e35516ce6" (UID: "e58f4f0b-0a2b-4f02-a61c-903e35516ce6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.482055 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-kube-api-access-h8xrv" (OuterVolumeSpecName: "kube-api-access-h8xrv") pod "e58f4f0b-0a2b-4f02-a61c-903e35516ce6" (UID: "e58f4f0b-0a2b-4f02-a61c-903e35516ce6"). InnerVolumeSpecName "kube-api-access-h8xrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.510627 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e58f4f0b-0a2b-4f02-a61c-903e35516ce6" (UID: "e58f4f0b-0a2b-4f02-a61c-903e35516ce6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.518150 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-config-data" (OuterVolumeSpecName: "config-data") pod "e58f4f0b-0a2b-4f02-a61c-903e35516ce6" (UID: "e58f4f0b-0a2b-4f02-a61c-903e35516ce6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.555601 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.555683 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.555703 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.555715 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8xrv\" (UniqueName: \"kubernetes.io/projected/e58f4f0b-0a2b-4f02-a61c-903e35516ce6-kube-api-access-h8xrv\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.768802 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jx5qb" event={"ID":"e58f4f0b-0a2b-4f02-a61c-903e35516ce6","Type":"ContainerDied","Data":"d742b21984901b203fca8f7fdcf5c7b4760154a4f75c083e4764eb60e0da9cf2"} Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.768857 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d742b21984901b203fca8f7fdcf5c7b4760154a4f75c083e4764eb60e0da9cf2" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.768898 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jx5qb" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.859561 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.878589 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 09:04:34 crc kubenswrapper[4895]: E0129 09:04:34.879211 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e58f4f0b-0a2b-4f02-a61c-903e35516ce6" containerName="nova-cell1-conductor-db-sync" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.879237 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e58f4f0b-0a2b-4f02-a61c-903e35516ce6" containerName="nova-cell1-conductor-db-sync" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.879485 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e58f4f0b-0a2b-4f02-a61c-903e35516ce6" containerName="nova-cell1-conductor-db-sync" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.882584 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.892569 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.913288 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.965106 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4653a20c-bef0-463a-962d-f1f17b2011e3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4653a20c-bef0-463a-962d-f1f17b2011e3\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.965201 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27hbj\" (UniqueName: \"kubernetes.io/projected/4653a20c-bef0-463a-962d-f1f17b2011e3-kube-api-access-27hbj\") pod \"nova-cell1-conductor-0\" (UID: \"4653a20c-bef0-463a-962d-f1f17b2011e3\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:04:34 crc kubenswrapper[4895]: I0129 09:04:34.965299 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4653a20c-bef0-463a-962d-f1f17b2011e3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4653a20c-bef0-463a-962d-f1f17b2011e3\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:04:35 crc kubenswrapper[4895]: I0129 09:04:35.070586 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27hbj\" (UniqueName: \"kubernetes.io/projected/4653a20c-bef0-463a-962d-f1f17b2011e3-kube-api-access-27hbj\") pod \"nova-cell1-conductor-0\" (UID: \"4653a20c-bef0-463a-962d-f1f17b2011e3\") " pod="openstack/nova-cell1-conductor-0" Jan 29 
09:04:35 crc kubenswrapper[4895]: I0129 09:04:35.071094 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4653a20c-bef0-463a-962d-f1f17b2011e3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4653a20c-bef0-463a-962d-f1f17b2011e3\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:04:35 crc kubenswrapper[4895]: I0129 09:04:35.072377 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4653a20c-bef0-463a-962d-f1f17b2011e3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4653a20c-bef0-463a-962d-f1f17b2011e3\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:04:35 crc kubenswrapper[4895]: I0129 09:04:35.092892 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4653a20c-bef0-463a-962d-f1f17b2011e3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4653a20c-bef0-463a-962d-f1f17b2011e3\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:04:35 crc kubenswrapper[4895]: I0129 09:04:35.092973 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4653a20c-bef0-463a-962d-f1f17b2011e3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4653a20c-bef0-463a-962d-f1f17b2011e3\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:04:35 crc kubenswrapper[4895]: I0129 09:04:35.096725 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27hbj\" (UniqueName: \"kubernetes.io/projected/4653a20c-bef0-463a-962d-f1f17b2011e3-kube-api-access-27hbj\") pod \"nova-cell1-conductor-0\" (UID: \"4653a20c-bef0-463a-962d-f1f17b2011e3\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:04:35 crc kubenswrapper[4895]: I0129 09:04:35.204046 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 09:04:35 crc kubenswrapper[4895]: I0129 09:04:35.248254 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49753c6-a374-4b5d-9404-71c86559e71b" path="/var/lib/kubelet/pods/d49753c6-a374-4b5d-9404-71c86559e71b/volumes" Jan 29 09:04:35 crc kubenswrapper[4895]: I0129 09:04:35.839698 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1a417ac6-d7a5-46e9-a456-f0a50beaa91d","Type":"ContainerStarted","Data":"c8585f99646178a5bc12eb8475cad3bf5b6d88577af11d1e2b825d526855538b"} Jan 29 09:04:35 crc kubenswrapper[4895]: I0129 09:04:35.840248 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1a417ac6-d7a5-46e9-a456-f0a50beaa91d","Type":"ContainerStarted","Data":"bbe954eac31c0db31f78602e8a9703bad2df636dd14c32908bc60d2c40ff5614"} Jan 29 09:04:35 crc kubenswrapper[4895]: I0129 09:04:35.876939 4895 generic.go:334] "Generic (PLEG): container finished" podID="4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" containerID="23abee1ddf4e4ef6becf729c32788cbd894404eb9f76acb2316d78b764c52570" exitCode=0 Jan 29 09:04:35 crc kubenswrapper[4895]: I0129 09:04:35.877068 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf","Type":"ContainerDied","Data":"23abee1ddf4e4ef6becf729c32788cbd894404eb9f76acb2316d78b764c52570"} Jan 29 09:04:35 crc kubenswrapper[4895]: I0129 09:04:35.899641 4895 generic.go:334] "Generic (PLEG): container finished" podID="af89c708-dad6-461e-a740-7c2f948e1f8a" containerID="aa7e63fea9a6ca2c2996b6ff99a2ab7084740ee334741a214e3c92a8f1b9b46c" exitCode=0 Jan 29 09:04:35 crc kubenswrapper[4895]: I0129 09:04:35.899901 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"af89c708-dad6-461e-a740-7c2f948e1f8a","Type":"ContainerDied","Data":"aa7e63fea9a6ca2c2996b6ff99a2ab7084740ee334741a214e3c92a8f1b9b46c"} Jan 29 09:04:35 crc kubenswrapper[4895]: I0129 09:04:35.909390 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.018197 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-config-data\") pod \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\" (UID: \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\") " Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.018281 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l49gc\" (UniqueName: \"kubernetes.io/projected/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-kube-api-access-l49gc\") pod \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\" (UID: \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\") " Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.018402 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-combined-ca-bundle\") pod \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\" (UID: \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\") " Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.018435 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-logs\") pod \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\" (UID: \"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf\") " Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.019754 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-logs" (OuterVolumeSpecName: "logs") pod 
"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" (UID: "4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.026107 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-kube-api-access-l49gc" (OuterVolumeSpecName: "kube-api-access-l49gc") pod "4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" (UID: "4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf"). InnerVolumeSpecName "kube-api-access-l49gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:04:36 crc kubenswrapper[4895]: E0129 09:04:36.107292 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa7e63fea9a6ca2c2996b6ff99a2ab7084740ee334741a214e3c92a8f1b9b46c is running failed: container process not found" containerID="aa7e63fea9a6ca2c2996b6ff99a2ab7084740ee334741a214e3c92a8f1b9b46c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 09:04:36 crc kubenswrapper[4895]: E0129 09:04:36.109045 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa7e63fea9a6ca2c2996b6ff99a2ab7084740ee334741a214e3c92a8f1b9b46c is running failed: container process not found" containerID="aa7e63fea9a6ca2c2996b6ff99a2ab7084740ee334741a214e3c92a8f1b9b46c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.109219 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-config-data" (OuterVolumeSpecName: "config-data") pod "4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" (UID: "4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:36 crc kubenswrapper[4895]: E0129 09:04:36.109653 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa7e63fea9a6ca2c2996b6ff99a2ab7084740ee334741a214e3c92a8f1b9b46c is running failed: container process not found" containerID="aa7e63fea9a6ca2c2996b6ff99a2ab7084740ee334741a214e3c92a8f1b9b46c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 09:04:36 crc kubenswrapper[4895]: E0129 09:04:36.109704 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa7e63fea9a6ca2c2996b6ff99a2ab7084740ee334741a214e3c92a8f1b9b46c is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="af89c708-dad6-461e-a740-7c2f948e1f8a" containerName="nova-scheduler-scheduler" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.119830 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" (UID: "4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.120724 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.121052 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.121064 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l49gc\" (UniqueName: \"kubernetes.io/projected/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-kube-api-access-l49gc\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.192430 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.223529 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.386582 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.533463 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af89c708-dad6-461e-a740-7c2f948e1f8a-config-data\") pod \"af89c708-dad6-461e-a740-7c2f948e1f8a\" (UID: \"af89c708-dad6-461e-a740-7c2f948e1f8a\") " Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.534276 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm87v\" (UniqueName: \"kubernetes.io/projected/af89c708-dad6-461e-a740-7c2f948e1f8a-kube-api-access-tm87v\") pod \"af89c708-dad6-461e-a740-7c2f948e1f8a\" (UID: \"af89c708-dad6-461e-a740-7c2f948e1f8a\") " Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.534341 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af89c708-dad6-461e-a740-7c2f948e1f8a-combined-ca-bundle\") pod \"af89c708-dad6-461e-a740-7c2f948e1f8a\" (UID: \"af89c708-dad6-461e-a740-7c2f948e1f8a\") " Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.543333 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af89c708-dad6-461e-a740-7c2f948e1f8a-kube-api-access-tm87v" (OuterVolumeSpecName: "kube-api-access-tm87v") pod "af89c708-dad6-461e-a740-7c2f948e1f8a" (UID: "af89c708-dad6-461e-a740-7c2f948e1f8a"). InnerVolumeSpecName "kube-api-access-tm87v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.567693 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af89c708-dad6-461e-a740-7c2f948e1f8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af89c708-dad6-461e-a740-7c2f948e1f8a" (UID: "af89c708-dad6-461e-a740-7c2f948e1f8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.575827 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af89c708-dad6-461e-a740-7c2f948e1f8a-config-data" (OuterVolumeSpecName: "config-data") pod "af89c708-dad6-461e-a740-7c2f948e1f8a" (UID: "af89c708-dad6-461e-a740-7c2f948e1f8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.637001 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af89c708-dad6-461e-a740-7c2f948e1f8a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.637038 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm87v\" (UniqueName: \"kubernetes.io/projected/af89c708-dad6-461e-a740-7c2f948e1f8a-kube-api-access-tm87v\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.637051 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af89c708-dad6-461e-a740-7c2f948e1f8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.913719 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"af89c708-dad6-461e-a740-7c2f948e1f8a","Type":"ContainerDied","Data":"d81af6e1b3d458f7694552f1533c3c36a689bf781179e9a108329c01bc5fa02c"} Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.914186 4895 scope.go:117] "RemoveContainer" containerID="aa7e63fea9a6ca2c2996b6ff99a2ab7084740ee334741a214e3c92a8f1b9b46c" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.913789 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.917029 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4653a20c-bef0-463a-962d-f1f17b2011e3","Type":"ContainerStarted","Data":"2114522e0d0660d50f4e74100c3edf834210ecbbc66f379f6e649ba008af9a68"} Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.917556 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4653a20c-bef0-463a-962d-f1f17b2011e3","Type":"ContainerStarted","Data":"3552e8efaafb51086ae9896f5867f6512e31533a791f1815636267586d23e97a"} Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.917610 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.923876 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1a417ac6-d7a5-46e9-a456-f0a50beaa91d","Type":"ContainerStarted","Data":"c80c280f35505a58b77a6c3633478493128b81dd798cb4aae89814d83681521a"} Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.929151 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf","Type":"ContainerDied","Data":"43865fc48267ecacb5749069c02963215333d9cc8cf5b5b81a520e60e4640bcb"} Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.929349 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.942517 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.9424921189999997 podStartE2EDuration="2.942492119s" podCreationTimestamp="2026-01-29 09:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:04:36.935836201 +0000 UTC m=+1418.577344347" watchObservedRunningTime="2026-01-29 09:04:36.942492119 +0000 UTC m=+1418.584000265" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.952571 4895 scope.go:117] "RemoveContainer" containerID="23abee1ddf4e4ef6becf729c32788cbd894404eb9f76acb2316d78b764c52570" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.974614 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.985910 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.986637 4895 scope.go:117] "RemoveContainer" containerID="95bf2b5f94e757e7cbb11e38bd44aa8a3ea96bc91a70f56ac2ce4d1492b25a9d" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.996863 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:04:36 crc kubenswrapper[4895]: E0129 09:04:36.997550 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" containerName="nova-api-api" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.997578 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" containerName="nova-api-api" Jan 29 09:04:36 crc kubenswrapper[4895]: E0129 09:04:36.997619 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" 
containerName="nova-api-log" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.997629 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" containerName="nova-api-log" Jan 29 09:04:36 crc kubenswrapper[4895]: E0129 09:04:36.997649 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af89c708-dad6-461e-a740-7c2f948e1f8a" containerName="nova-scheduler-scheduler" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.997657 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="af89c708-dad6-461e-a740-7c2f948e1f8a" containerName="nova-scheduler-scheduler" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.998054 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" containerName="nova-api-api" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.998082 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" containerName="nova-api-log" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.998101 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="af89c708-dad6-461e-a740-7c2f948e1f8a" containerName="nova-scheduler-scheduler" Jan 29 09:04:36 crc kubenswrapper[4895]: I0129 09:04:36.999262 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.005122 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.010552 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.010524774 podStartE2EDuration="4.010524774s" podCreationTimestamp="2026-01-29 09:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:04:37.002574153 +0000 UTC m=+1418.644082309" watchObservedRunningTime="2026-01-29 09:04:37.010524774 +0000 UTC m=+1418.652032920" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.106134 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.192483 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.240751 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af89c708-dad6-461e-a740-7c2f948e1f8a" path="/var/lib/kubelet/pods/af89c708-dad6-461e-a740-7c2f948e1f8a/volumes" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.241713 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.245482 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ef3ca3-2112-4d1c-b10f-4b758945253f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"74ef3ca3-2112-4d1c-b10f-4b758945253f\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.245591 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dswjg\" (UniqueName: \"kubernetes.io/projected/74ef3ca3-2112-4d1c-b10f-4b758945253f-kube-api-access-dswjg\") pod \"nova-scheduler-0\" (UID: \"74ef3ca3-2112-4d1c-b10f-4b758945253f\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.245630 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ef3ca3-2112-4d1c-b10f-4b758945253f-config-data\") pod \"nova-scheduler-0\" (UID: \"74ef3ca3-2112-4d1c-b10f-4b758945253f\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.254929 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.257527 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.264826 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.270500 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.288235 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.288616 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="598e3a01-9620-4320-b00b-ac10baddb593" containerName="kube-state-metrics" containerID="cri-o://be34aa8314b085cbebf2822418324ea77c91ee8ffcb5ecd7cc5e0a3b50371264" gracePeriod=30 Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.347653 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dswjg\" (UniqueName: 
\"kubernetes.io/projected/74ef3ca3-2112-4d1c-b10f-4b758945253f-kube-api-access-dswjg\") pod \"nova-scheduler-0\" (UID: \"74ef3ca3-2112-4d1c-b10f-4b758945253f\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.348090 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3333219-55c5-4bbb-b48a-6f9e83849794-logs\") pod \"nova-api-0\" (UID: \"c3333219-55c5-4bbb-b48a-6f9e83849794\") " pod="openstack/nova-api-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.348240 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ef3ca3-2112-4d1c-b10f-4b758945253f-config-data\") pod \"nova-scheduler-0\" (UID: \"74ef3ca3-2112-4d1c-b10f-4b758945253f\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.348385 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3333219-55c5-4bbb-b48a-6f9e83849794-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3333219-55c5-4bbb-b48a-6f9e83849794\") " pod="openstack/nova-api-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.348563 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rkcw\" (UniqueName: \"kubernetes.io/projected/c3333219-55c5-4bbb-b48a-6f9e83849794-kube-api-access-7rkcw\") pod \"nova-api-0\" (UID: \"c3333219-55c5-4bbb-b48a-6f9e83849794\") " pod="openstack/nova-api-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.348821 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ef3ca3-2112-4d1c-b10f-4b758945253f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"74ef3ca3-2112-4d1c-b10f-4b758945253f\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.348860 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3333219-55c5-4bbb-b48a-6f9e83849794-config-data\") pod \"nova-api-0\" (UID: \"c3333219-55c5-4bbb-b48a-6f9e83849794\") " pod="openstack/nova-api-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.361870 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ef3ca3-2112-4d1c-b10f-4b758945253f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"74ef3ca3-2112-4d1c-b10f-4b758945253f\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.362486 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ef3ca3-2112-4d1c-b10f-4b758945253f-config-data\") pod \"nova-scheduler-0\" (UID: \"74ef3ca3-2112-4d1c-b10f-4b758945253f\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.372807 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dswjg\" (UniqueName: \"kubernetes.io/projected/74ef3ca3-2112-4d1c-b10f-4b758945253f-kube-api-access-dswjg\") pod \"nova-scheduler-0\" (UID: \"74ef3ca3-2112-4d1c-b10f-4b758945253f\") " pod="openstack/nova-scheduler-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.409420 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.452031 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3333219-55c5-4bbb-b48a-6f9e83849794-logs\") pod \"nova-api-0\" (UID: \"c3333219-55c5-4bbb-b48a-6f9e83849794\") " pod="openstack/nova-api-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.452103 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3333219-55c5-4bbb-b48a-6f9e83849794-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3333219-55c5-4bbb-b48a-6f9e83849794\") " pod="openstack/nova-api-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.452204 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rkcw\" (UniqueName: \"kubernetes.io/projected/c3333219-55c5-4bbb-b48a-6f9e83849794-kube-api-access-7rkcw\") pod \"nova-api-0\" (UID: \"c3333219-55c5-4bbb-b48a-6f9e83849794\") " pod="openstack/nova-api-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.452511 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3333219-55c5-4bbb-b48a-6f9e83849794-config-data\") pod \"nova-api-0\" (UID: \"c3333219-55c5-4bbb-b48a-6f9e83849794\") " pod="openstack/nova-api-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.453419 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3333219-55c5-4bbb-b48a-6f9e83849794-logs\") pod \"nova-api-0\" (UID: \"c3333219-55c5-4bbb-b48a-6f9e83849794\") " pod="openstack/nova-api-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.458691 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c3333219-55c5-4bbb-b48a-6f9e83849794-config-data\") pod \"nova-api-0\" (UID: \"c3333219-55c5-4bbb-b48a-6f9e83849794\") " pod="openstack/nova-api-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.461551 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3333219-55c5-4bbb-b48a-6f9e83849794-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3333219-55c5-4bbb-b48a-6f9e83849794\") " pod="openstack/nova-api-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.475315 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rkcw\" (UniqueName: \"kubernetes.io/projected/c3333219-55c5-4bbb-b48a-6f9e83849794-kube-api-access-7rkcw\") pod \"nova-api-0\" (UID: \"c3333219-55c5-4bbb-b48a-6f9e83849794\") " pod="openstack/nova-api-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.591820 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.944477 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.950803 4895 generic.go:334] "Generic (PLEG): container finished" podID="598e3a01-9620-4320-b00b-ac10baddb593" containerID="be34aa8314b085cbebf2822418324ea77c91ee8ffcb5ecd7cc5e0a3b50371264" exitCode=2 Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.950961 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"598e3a01-9620-4320-b00b-ac10baddb593","Type":"ContainerDied","Data":"be34aa8314b085cbebf2822418324ea77c91ee8ffcb5ecd7cc5e0a3b50371264"} Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.951024 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"598e3a01-9620-4320-b00b-ac10baddb593","Type":"ContainerDied","Data":"0e0c6e3fa6ed37f6c114da48aed29149b7bee4039d3d0f4081b4587c3ca08973"} Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.951048 4895 scope.go:117] "RemoveContainer" containerID="be34aa8314b085cbebf2822418324ea77c91ee8ffcb5ecd7cc5e0a3b50371264" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.951135 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.993471 4895 scope.go:117] "RemoveContainer" containerID="be34aa8314b085cbebf2822418324ea77c91ee8ffcb5ecd7cc5e0a3b50371264" Jan 29 09:04:37 crc kubenswrapper[4895]: E0129 09:04:37.994511 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be34aa8314b085cbebf2822418324ea77c91ee8ffcb5ecd7cc5e0a3b50371264\": container with ID starting with be34aa8314b085cbebf2822418324ea77c91ee8ffcb5ecd7cc5e0a3b50371264 not found: ID does not exist" containerID="be34aa8314b085cbebf2822418324ea77c91ee8ffcb5ecd7cc5e0a3b50371264" Jan 29 09:04:37 crc kubenswrapper[4895]: I0129 09:04:37.994559 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be34aa8314b085cbebf2822418324ea77c91ee8ffcb5ecd7cc5e0a3b50371264"} err="failed to get container status \"be34aa8314b085cbebf2822418324ea77c91ee8ffcb5ecd7cc5e0a3b50371264\": rpc error: code = NotFound desc = could not find container \"be34aa8314b085cbebf2822418324ea77c91ee8ffcb5ecd7cc5e0a3b50371264\": container with ID starting with be34aa8314b085cbebf2822418324ea77c91ee8ffcb5ecd7cc5e0a3b50371264 not found: ID does not exist" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.082674 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkg97\" (UniqueName: \"kubernetes.io/projected/598e3a01-9620-4320-b00b-ac10baddb593-kube-api-access-vkg97\") pod \"598e3a01-9620-4320-b00b-ac10baddb593\" (UID: \"598e3a01-9620-4320-b00b-ac10baddb593\") " Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.096359 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598e3a01-9620-4320-b00b-ac10baddb593-kube-api-access-vkg97" (OuterVolumeSpecName: "kube-api-access-vkg97") pod "598e3a01-9620-4320-b00b-ac10baddb593" (UID: 
"598e3a01-9620-4320-b00b-ac10baddb593"). InnerVolumeSpecName "kube-api-access-vkg97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.127634 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.199275 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkg97\" (UniqueName: \"kubernetes.io/projected/598e3a01-9620-4320-b00b-ac10baddb593-kube-api-access-vkg97\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:38 crc kubenswrapper[4895]: W0129 09:04:38.278604 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3333219_55c5_4bbb_b48a_6f9e83849794.slice/crio-e7e59123ccbed10bf3ad611eb657c3de2229194424f8005464aa5ec4fa6d00df WatchSource:0}: Error finding container e7e59123ccbed10bf3ad611eb657c3de2229194424f8005464aa5ec4fa6d00df: Status 404 returned error can't find the container with id e7e59123ccbed10bf3ad611eb657c3de2229194424f8005464aa5ec4fa6d00df Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.281280 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.401015 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.429068 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.448463 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:04:38 crc kubenswrapper[4895]: E0129 09:04:38.449210 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598e3a01-9620-4320-b00b-ac10baddb593" containerName="kube-state-metrics" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.449242 
4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="598e3a01-9620-4320-b00b-ac10baddb593" containerName="kube-state-metrics" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.449577 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="598e3a01-9620-4320-b00b-ac10baddb593" containerName="kube-state-metrics" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.450731 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.460295 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.467437 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.484049 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.610381 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/75059205-4797-4975-98d8-bcbf919748ba-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"75059205-4797-4975-98d8-bcbf919748ba\") " pod="openstack/kube-state-metrics-0" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.610486 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8zcx\" (UniqueName: \"kubernetes.io/projected/75059205-4797-4975-98d8-bcbf919748ba-kube-api-access-j8zcx\") pod \"kube-state-metrics-0\" (UID: \"75059205-4797-4975-98d8-bcbf919748ba\") " pod="openstack/kube-state-metrics-0" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.610586 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/75059205-4797-4975-98d8-bcbf919748ba-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"75059205-4797-4975-98d8-bcbf919748ba\") " pod="openstack/kube-state-metrics-0" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.610621 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75059205-4797-4975-98d8-bcbf919748ba-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"75059205-4797-4975-98d8-bcbf919748ba\") " pod="openstack/kube-state-metrics-0" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.713128 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/75059205-4797-4975-98d8-bcbf919748ba-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"75059205-4797-4975-98d8-bcbf919748ba\") " pod="openstack/kube-state-metrics-0" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.713229 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8zcx\" (UniqueName: \"kubernetes.io/projected/75059205-4797-4975-98d8-bcbf919748ba-kube-api-access-j8zcx\") pod \"kube-state-metrics-0\" (UID: \"75059205-4797-4975-98d8-bcbf919748ba\") " pod="openstack/kube-state-metrics-0" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.713301 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/75059205-4797-4975-98d8-bcbf919748ba-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"75059205-4797-4975-98d8-bcbf919748ba\") " pod="openstack/kube-state-metrics-0" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.713901 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75059205-4797-4975-98d8-bcbf919748ba-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"75059205-4797-4975-98d8-bcbf919748ba\") " pod="openstack/kube-state-metrics-0" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.724183 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/75059205-4797-4975-98d8-bcbf919748ba-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"75059205-4797-4975-98d8-bcbf919748ba\") " pod="openstack/kube-state-metrics-0" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.724209 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75059205-4797-4975-98d8-bcbf919748ba-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"75059205-4797-4975-98d8-bcbf919748ba\") " pod="openstack/kube-state-metrics-0" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.728678 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/75059205-4797-4975-98d8-bcbf919748ba-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"75059205-4797-4975-98d8-bcbf919748ba\") " pod="openstack/kube-state-metrics-0" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.735837 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8zcx\" (UniqueName: \"kubernetes.io/projected/75059205-4797-4975-98d8-bcbf919748ba-kube-api-access-j8zcx\") pod \"kube-state-metrics-0\" (UID: \"75059205-4797-4975-98d8-bcbf919748ba\") " pod="openstack/kube-state-metrics-0" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.865125 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.974843 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"74ef3ca3-2112-4d1c-b10f-4b758945253f","Type":"ContainerStarted","Data":"39430963f6521a41c42fbbb54c4a8d7756f3d578096a4b2182026b430ff2e09e"} Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.974936 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"74ef3ca3-2112-4d1c-b10f-4b758945253f","Type":"ContainerStarted","Data":"99d4be1539d7bed87df12209c78d6d9da980fcf8453c67708e1b83ceefdfa73c"} Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.982568 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3333219-55c5-4bbb-b48a-6f9e83849794","Type":"ContainerStarted","Data":"0e1afd25855d3fc44f5f8674b7e18630204b4d7ae42aef6c0de6085ddd18891e"} Jan 29 09:04:38 crc kubenswrapper[4895]: I0129 09:04:38.982615 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3333219-55c5-4bbb-b48a-6f9e83849794","Type":"ContainerStarted","Data":"e7e59123ccbed10bf3ad611eb657c3de2229194424f8005464aa5ec4fa6d00df"} Jan 29 09:04:39 crc kubenswrapper[4895]: I0129 09:04:39.230459 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf" path="/var/lib/kubelet/pods/4163e0dc-95a3-4e5b-b5b7-a104a9a0afbf/volumes" Jan 29 09:04:39 crc kubenswrapper[4895]: I0129 09:04:39.232256 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="598e3a01-9620-4320-b00b-ac10baddb593" path="/var/lib/kubelet/pods/598e3a01-9620-4320-b00b-ac10baddb593/volumes" Jan 29 09:04:39 crc kubenswrapper[4895]: I0129 09:04:39.233112 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:04:39 crc kubenswrapper[4895]: I0129 09:04:39.233157 4895 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:04:39 crc kubenswrapper[4895]: I0129 09:04:39.252622 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.252589797 podStartE2EDuration="3.252589797s" podCreationTimestamp="2026-01-29 09:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:04:39.00208174 +0000 UTC m=+1420.643589886" watchObservedRunningTime="2026-01-29 09:04:39.252589797 +0000 UTC m=+1420.894097943" Jan 29 09:04:39 crc kubenswrapper[4895]: W0129 09:04:39.441678 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75059205_4797_4975_98d8_bcbf919748ba.slice/crio-cdd18891847ab91e97413afd4d478f612b64508ad6653ecbb24608226b84120c WatchSource:0}: Error finding container cdd18891847ab91e97413afd4d478f612b64508ad6653ecbb24608226b84120c: Status 404 returned error can't find the container with id cdd18891847ab91e97413afd4d478f612b64508ad6653ecbb24608226b84120c Jan 29 09:04:39 crc kubenswrapper[4895]: I0129 09:04:39.449212 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 09:04:40 crc kubenswrapper[4895]: I0129 09:04:40.004795 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3333219-55c5-4bbb-b48a-6f9e83849794","Type":"ContainerStarted","Data":"ec7ecae43c22ddb5a941e912ee95f35abf8ff2136de3fcf686d868302ea51ddf"} Jan 29 09:04:40 crc kubenswrapper[4895]: I0129 09:04:40.010059 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"75059205-4797-4975-98d8-bcbf919748ba","Type":"ContainerStarted","Data":"cdd18891847ab91e97413afd4d478f612b64508ad6653ecbb24608226b84120c"} Jan 29 09:04:40 crc kubenswrapper[4895]: 
I0129 09:04:40.038878 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.038848407 podStartE2EDuration="3.038848407s" podCreationTimestamp="2026-01-29 09:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:04:40.038246831 +0000 UTC m=+1421.679754987" watchObservedRunningTime="2026-01-29 09:04:40.038848407 +0000 UTC m=+1421.680356553" Jan 29 09:04:40 crc kubenswrapper[4895]: I0129 09:04:40.423126 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:04:40 crc kubenswrapper[4895]: I0129 09:04:40.423986 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerName="ceilometer-central-agent" containerID="cri-o://40d7d2a28048f4c5b2109fcf99283eeac6dacb8837828e255ea08022393a1069" gracePeriod=30 Jan 29 09:04:40 crc kubenswrapper[4895]: I0129 09:04:40.424155 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerName="proxy-httpd" containerID="cri-o://3ea3cee743d6c39820726922743e6c1ca138f7432d76efaed68c43f9765d8b9e" gracePeriod=30 Jan 29 09:04:40 crc kubenswrapper[4895]: I0129 09:04:40.424209 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerName="sg-core" containerID="cri-o://8bc8e26b49a1d1a10666ea7e88a0eaaee0c4ab11f5e437464ccdac53743fa15e" gracePeriod=30 Jan 29 09:04:40 crc kubenswrapper[4895]: I0129 09:04:40.424249 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerName="ceilometer-notification-agent" 
containerID="cri-o://899a094a20448dc2bccb6eb2d248a52ab39e84bbc1ca2a0b16cb6a9cb1bba65f" gracePeriod=30 Jan 29 09:04:41 crc kubenswrapper[4895]: I0129 09:04:41.023237 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"75059205-4797-4975-98d8-bcbf919748ba","Type":"ContainerStarted","Data":"e581403f4e8c74c38fd573fcf4b565024161cf486fad2fc5fd5c715870b1f90a"} Jan 29 09:04:41 crc kubenswrapper[4895]: I0129 09:04:41.023416 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 09:04:41 crc kubenswrapper[4895]: I0129 09:04:41.028203 4895 generic.go:334] "Generic (PLEG): container finished" podID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerID="3ea3cee743d6c39820726922743e6c1ca138f7432d76efaed68c43f9765d8b9e" exitCode=0 Jan 29 09:04:41 crc kubenswrapper[4895]: I0129 09:04:41.028251 4895 generic.go:334] "Generic (PLEG): container finished" podID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerID="8bc8e26b49a1d1a10666ea7e88a0eaaee0c4ab11f5e437464ccdac53743fa15e" exitCode=2 Jan 29 09:04:41 crc kubenswrapper[4895]: I0129 09:04:41.028298 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a43fe081-4498-49ae-8dbb-a6068aadfb06","Type":"ContainerDied","Data":"3ea3cee743d6c39820726922743e6c1ca138f7432d76efaed68c43f9765d8b9e"} Jan 29 09:04:41 crc kubenswrapper[4895]: I0129 09:04:41.028380 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a43fe081-4498-49ae-8dbb-a6068aadfb06","Type":"ContainerDied","Data":"8bc8e26b49a1d1a10666ea7e88a0eaaee0c4ab11f5e437464ccdac53743fa15e"} Jan 29 09:04:41 crc kubenswrapper[4895]: I0129 09:04:41.079481 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.688929292 podStartE2EDuration="3.079454306s" podCreationTimestamp="2026-01-29 09:04:38 +0000 UTC" 
firstStartedPulling="2026-01-29 09:04:39.45570538 +0000 UTC m=+1421.097213526" lastFinishedPulling="2026-01-29 09:04:39.846230394 +0000 UTC m=+1421.487738540" observedRunningTime="2026-01-29 09:04:41.060798608 +0000 UTC m=+1422.702306754" watchObservedRunningTime="2026-01-29 09:04:41.079454306 +0000 UTC m=+1422.720962452" Jan 29 09:04:42 crc kubenswrapper[4895]: I0129 09:04:42.044158 4895 generic.go:334] "Generic (PLEG): container finished" podID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerID="40d7d2a28048f4c5b2109fcf99283eeac6dacb8837828e255ea08022393a1069" exitCode=0 Jan 29 09:04:42 crc kubenswrapper[4895]: I0129 09:04:42.044226 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a43fe081-4498-49ae-8dbb-a6068aadfb06","Type":"ContainerDied","Data":"40d7d2a28048f4c5b2109fcf99283eeac6dacb8837828e255ea08022393a1069"} Jan 29 09:04:42 crc kubenswrapper[4895]: I0129 09:04:42.409759 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.062755 4895 generic.go:334] "Generic (PLEG): container finished" podID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerID="899a094a20448dc2bccb6eb2d248a52ab39e84bbc1ca2a0b16cb6a9cb1bba65f" exitCode=0 Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.062815 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a43fe081-4498-49ae-8dbb-a6068aadfb06","Type":"ContainerDied","Data":"899a094a20448dc2bccb6eb2d248a52ab39e84bbc1ca2a0b16cb6a9cb1bba65f"} Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.063224 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a43fe081-4498-49ae-8dbb-a6068aadfb06","Type":"ContainerDied","Data":"bdb3a1630521d36010852c4c374d64f7a087490af366a8b0975463550050de1b"} Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.063240 4895 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="bdb3a1630521d36010852c4c374d64f7a087490af366a8b0975463550050de1b" Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.115425 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.158866 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-scripts\") pod \"a43fe081-4498-49ae-8dbb-a6068aadfb06\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.159220 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnfhz\" (UniqueName: \"kubernetes.io/projected/a43fe081-4498-49ae-8dbb-a6068aadfb06-kube-api-access-cnfhz\") pod \"a43fe081-4498-49ae-8dbb-a6068aadfb06\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.159271 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-sg-core-conf-yaml\") pod \"a43fe081-4498-49ae-8dbb-a6068aadfb06\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.159340 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-combined-ca-bundle\") pod \"a43fe081-4498-49ae-8dbb-a6068aadfb06\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.159398 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a43fe081-4498-49ae-8dbb-a6068aadfb06-log-httpd\") pod \"a43fe081-4498-49ae-8dbb-a6068aadfb06\" 
(UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.159420 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a43fe081-4498-49ae-8dbb-a6068aadfb06-run-httpd\") pod \"a43fe081-4498-49ae-8dbb-a6068aadfb06\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.159449 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-config-data\") pod \"a43fe081-4498-49ae-8dbb-a6068aadfb06\" (UID: \"a43fe081-4498-49ae-8dbb-a6068aadfb06\") " Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.165356 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a43fe081-4498-49ae-8dbb-a6068aadfb06-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a43fe081-4498-49ae-8dbb-a6068aadfb06" (UID: "a43fe081-4498-49ae-8dbb-a6068aadfb06"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.165981 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a43fe081-4498-49ae-8dbb-a6068aadfb06-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a43fe081-4498-49ae-8dbb-a6068aadfb06" (UID: "a43fe081-4498-49ae-8dbb-a6068aadfb06"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.179060 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a43fe081-4498-49ae-8dbb-a6068aadfb06-kube-api-access-cnfhz" (OuterVolumeSpecName: "kube-api-access-cnfhz") pod "a43fe081-4498-49ae-8dbb-a6068aadfb06" (UID: "a43fe081-4498-49ae-8dbb-a6068aadfb06"). InnerVolumeSpecName "kube-api-access-cnfhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.190549 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-scripts" (OuterVolumeSpecName: "scripts") pod "a43fe081-4498-49ae-8dbb-a6068aadfb06" (UID: "a43fe081-4498-49ae-8dbb-a6068aadfb06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.235335 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a43fe081-4498-49ae-8dbb-a6068aadfb06" (UID: "a43fe081-4498-49ae-8dbb-a6068aadfb06"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.277113 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnfhz\" (UniqueName: \"kubernetes.io/projected/a43fe081-4498-49ae-8dbb-a6068aadfb06-kube-api-access-cnfhz\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.283372 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.283476 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a43fe081-4498-49ae-8dbb-a6068aadfb06-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.283515 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a43fe081-4498-49ae-8dbb-a6068aadfb06-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 
09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.283529 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.338660 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-config-data" (OuterVolumeSpecName: "config-data") pod "a43fe081-4498-49ae-8dbb-a6068aadfb06" (UID: "a43fe081-4498-49ae-8dbb-a6068aadfb06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.351421 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a43fe081-4498-49ae-8dbb-a6068aadfb06" (UID: "a43fe081-4498-49ae-8dbb-a6068aadfb06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.386268 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:43 crc kubenswrapper[4895]: I0129 09:04:43.386317 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43fe081-4498-49ae-8dbb-a6068aadfb06-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.072007 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.107430 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.117692 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.131026 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:04:44 crc kubenswrapper[4895]: E0129 09:04:44.131503 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerName="sg-core" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.131524 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerName="sg-core" Jan 29 09:04:44 crc kubenswrapper[4895]: E0129 09:04:44.131539 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerName="proxy-httpd" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.131546 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerName="proxy-httpd" Jan 29 09:04:44 crc kubenswrapper[4895]: E0129 09:04:44.131557 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerName="ceilometer-notification-agent" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.131564 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerName="ceilometer-notification-agent" Jan 29 09:04:44 crc kubenswrapper[4895]: E0129 09:04:44.131581 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerName="ceilometer-central-agent" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.131587 4895 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerName="ceilometer-central-agent" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.131771 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerName="ceilometer-notification-agent" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.131787 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerName="ceilometer-central-agent" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.131807 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerName="sg-core" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.131816 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" containerName="proxy-httpd" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.133751 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.137630 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.138130 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.139470 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.147653 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.206752 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.206901 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-scripts\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.207023 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.207202 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-config-data\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.207284 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/470474b6-2796-4bac-b679-8dd1d9a043f2-run-httpd\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.207575 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbz6k\" (UniqueName: \"kubernetes.io/projected/470474b6-2796-4bac-b679-8dd1d9a043f2-kube-api-access-cbz6k\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.207799 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/470474b6-2796-4bac-b679-8dd1d9a043f2-log-httpd\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.208032 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.233182 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.233745 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.311019 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/470474b6-2796-4bac-b679-8dd1d9a043f2-log-httpd\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.311705 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/470474b6-2796-4bac-b679-8dd1d9a043f2-log-httpd\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.312652 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.313661 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.313732 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-scripts\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.313889 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.314176 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-config-data\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.314283 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/470474b6-2796-4bac-b679-8dd1d9a043f2-run-httpd\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.314391 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbz6k\" (UniqueName: \"kubernetes.io/projected/470474b6-2796-4bac-b679-8dd1d9a043f2-kube-api-access-cbz6k\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.315849 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/470474b6-2796-4bac-b679-8dd1d9a043f2-run-httpd\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.318179 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 
09:04:44.320483 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.321592 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-config-data\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.322084 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-scripts\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.323405 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.334322 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbz6k\" (UniqueName: \"kubernetes.io/projected/470474b6-2796-4bac-b679-8dd1d9a043f2-kube-api-access-cbz6k\") pod \"ceilometer-0\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " pod="openstack/ceilometer-0" Jan 29 09:04:44 crc kubenswrapper[4895]: I0129 09:04:44.456133 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:04:45 crc kubenswrapper[4895]: I0129 09:04:45.004330 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:04:45 crc kubenswrapper[4895]: I0129 09:04:45.087673 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"470474b6-2796-4bac-b679-8dd1d9a043f2","Type":"ContainerStarted","Data":"7f3350c91859efe856c299ca439edb19e2a205285a0d2d60a3d8e215e97c358b"} Jan 29 09:04:45 crc kubenswrapper[4895]: I0129 09:04:45.238656 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a43fe081-4498-49ae-8dbb-a6068aadfb06" path="/var/lib/kubelet/pods/a43fe081-4498-49ae-8dbb-a6068aadfb06/volumes" Jan 29 09:04:45 crc kubenswrapper[4895]: I0129 09:04:45.247251 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1a417ac6-d7a5-46e9-a456-f0a50beaa91d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:04:45 crc kubenswrapper[4895]: I0129 09:04:45.247332 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1a417ac6-d7a5-46e9-a456-f0a50beaa91d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:04:45 crc kubenswrapper[4895]: I0129 09:04:45.260140 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 09:04:46 crc kubenswrapper[4895]: I0129 09:04:46.100954 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"470474b6-2796-4bac-b679-8dd1d9a043f2","Type":"ContainerStarted","Data":"571acce083a56e2e6183efb9d84a23c2881ab9913fc8dfab14104083091392fd"} Jan 29 09:04:47 crc 
kubenswrapper[4895]: I0129 09:04:47.118408 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"470474b6-2796-4bac-b679-8dd1d9a043f2","Type":"ContainerStarted","Data":"74abc32ad1460f0af3918a697b0c43a92eb5c6837ae538dcdbcfa55589f796f2"} Jan 29 09:04:47 crc kubenswrapper[4895]: I0129 09:04:47.410365 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 09:04:47 crc kubenswrapper[4895]: I0129 09:04:47.447701 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 09:04:47 crc kubenswrapper[4895]: I0129 09:04:47.593465 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 09:04:47 crc kubenswrapper[4895]: I0129 09:04:47.593539 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 09:04:48 crc kubenswrapper[4895]: I0129 09:04:48.134135 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"470474b6-2796-4bac-b679-8dd1d9a043f2","Type":"ContainerStarted","Data":"c19cceb25246c2b90cd64e89f22b7d4ddc23e5b9db63a5827b4465a31614fa1c"} Jan 29 09:04:48 crc kubenswrapper[4895]: I0129 09:04:48.168477 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 09:04:48 crc kubenswrapper[4895]: I0129 09:04:48.676229 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c3333219-55c5-4bbb-b48a-6f9e83849794" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 09:04:48 crc kubenswrapper[4895]: I0129 09:04:48.676269 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c3333219-55c5-4bbb-b48a-6f9e83849794" 
containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 09:04:48 crc kubenswrapper[4895]: I0129 09:04:48.886281 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 09:04:50 crc kubenswrapper[4895]: I0129 09:04:50.158844 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"470474b6-2796-4bac-b679-8dd1d9a043f2","Type":"ContainerStarted","Data":"01b55a0f2a5ad462bcfd7733b920b35f7aa7ab9898d5ce83015e9edaa07c74fb"} Jan 29 09:04:50 crc kubenswrapper[4895]: I0129 09:04:50.159260 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:04:50 crc kubenswrapper[4895]: I0129 09:04:50.185789 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.272710953 podStartE2EDuration="6.185755104s" podCreationTimestamp="2026-01-29 09:04:44 +0000 UTC" firstStartedPulling="2026-01-29 09:04:45.005628067 +0000 UTC m=+1426.647136213" lastFinishedPulling="2026-01-29 09:04:48.918672228 +0000 UTC m=+1430.560180364" observedRunningTime="2026-01-29 09:04:50.183770201 +0000 UTC m=+1431.825278347" watchObservedRunningTime="2026-01-29 09:04:50.185755104 +0000 UTC m=+1431.827263250" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.140448 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.214994 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3405b2be-d52c-4e7e-846e-8ae737452bae-combined-ca-bundle\") pod \"3405b2be-d52c-4e7e-846e-8ae737452bae\" (UID: \"3405b2be-d52c-4e7e-846e-8ae737452bae\") " Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.215073 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgmbj\" (UniqueName: \"kubernetes.io/projected/3405b2be-d52c-4e7e-846e-8ae737452bae-kube-api-access-xgmbj\") pod \"3405b2be-d52c-4e7e-846e-8ae737452bae\" (UID: \"3405b2be-d52c-4e7e-846e-8ae737452bae\") " Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.215183 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3405b2be-d52c-4e7e-846e-8ae737452bae-config-data\") pod \"3405b2be-d52c-4e7e-846e-8ae737452bae\" (UID: \"3405b2be-d52c-4e7e-846e-8ae737452bae\") " Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.220694 4895 generic.go:334] "Generic (PLEG): container finished" podID="3405b2be-d52c-4e7e-846e-8ae737452bae" containerID="c683b0b155bae1b66244ed1ea58e75157822d2dbae1ca853b5f15574f4ba1d2e" exitCode=137 Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.220802 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.233329 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3405b2be-d52c-4e7e-846e-8ae737452bae-kube-api-access-xgmbj" (OuterVolumeSpecName: "kube-api-access-xgmbj") pod "3405b2be-d52c-4e7e-846e-8ae737452bae" (UID: "3405b2be-d52c-4e7e-846e-8ae737452bae"). InnerVolumeSpecName "kube-api-access-xgmbj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.266287 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3405b2be-d52c-4e7e-846e-8ae737452bae-config-data" (OuterVolumeSpecName: "config-data") pod "3405b2be-d52c-4e7e-846e-8ae737452bae" (UID: "3405b2be-d52c-4e7e-846e-8ae737452bae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.269773 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3405b2be-d52c-4e7e-846e-8ae737452bae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3405b2be-d52c-4e7e-846e-8ae737452bae" (UID: "3405b2be-d52c-4e7e-846e-8ae737452bae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.318500 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3405b2be-d52c-4e7e-846e-8ae737452bae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.318871 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgmbj\" (UniqueName: \"kubernetes.io/projected/3405b2be-d52c-4e7e-846e-8ae737452bae-kube-api-access-xgmbj\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.318997 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3405b2be-d52c-4e7e-846e-8ae737452bae-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.333997 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"3405b2be-d52c-4e7e-846e-8ae737452bae","Type":"ContainerDied","Data":"c683b0b155bae1b66244ed1ea58e75157822d2dbae1ca853b5f15574f4ba1d2e"} Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.334064 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3405b2be-d52c-4e7e-846e-8ae737452bae","Type":"ContainerDied","Data":"aff66b0b00d4c31c14972f0f51061b56c7a4f7e4688607f5c07ec94e76e1b800"} Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.334088 4895 scope.go:117] "RemoveContainer" containerID="c683b0b155bae1b66244ed1ea58e75157822d2dbae1ca853b5f15574f4ba1d2e" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.361843 4895 scope.go:117] "RemoveContainer" containerID="c683b0b155bae1b66244ed1ea58e75157822d2dbae1ca853b5f15574f4ba1d2e" Jan 29 09:04:53 crc kubenswrapper[4895]: E0129 09:04:53.362654 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c683b0b155bae1b66244ed1ea58e75157822d2dbae1ca853b5f15574f4ba1d2e\": container with ID starting with c683b0b155bae1b66244ed1ea58e75157822d2dbae1ca853b5f15574f4ba1d2e not found: ID does not exist" containerID="c683b0b155bae1b66244ed1ea58e75157822d2dbae1ca853b5f15574f4ba1d2e" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.362818 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c683b0b155bae1b66244ed1ea58e75157822d2dbae1ca853b5f15574f4ba1d2e"} err="failed to get container status \"c683b0b155bae1b66244ed1ea58e75157822d2dbae1ca853b5f15574f4ba1d2e\": rpc error: code = NotFound desc = could not find container \"c683b0b155bae1b66244ed1ea58e75157822d2dbae1ca853b5f15574f4ba1d2e\": container with ID starting with c683b0b155bae1b66244ed1ea58e75157822d2dbae1ca853b5f15574f4ba1d2e not found: ID does not exist" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.711094 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.741179 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.782001 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:04:53 crc kubenswrapper[4895]: E0129 09:04:53.782601 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3405b2be-d52c-4e7e-846e-8ae737452bae" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.782628 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3405b2be-d52c-4e7e-846e-8ae737452bae" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.782888 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3405b2be-d52c-4e7e-846e-8ae737452bae" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.796409 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.800838 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.801084 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.801165 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.824275 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.940661 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212553fe-f689-4d32-9368-e1f5a6a9654d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"212553fe-f689-4d32-9368-e1f5a6a9654d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.940804 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/212553fe-f689-4d32-9368-e1f5a6a9654d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"212553fe-f689-4d32-9368-e1f5a6a9654d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.940851 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/212553fe-f689-4d32-9368-e1f5a6a9654d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"212553fe-f689-4d32-9368-e1f5a6a9654d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 
09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.940910 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxk88\" (UniqueName: \"kubernetes.io/projected/212553fe-f689-4d32-9368-e1f5a6a9654d-kube-api-access-xxk88\") pod \"nova-cell1-novncproxy-0\" (UID: \"212553fe-f689-4d32-9368-e1f5a6a9654d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:53 crc kubenswrapper[4895]: I0129 09:04:53.940955 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/212553fe-f689-4d32-9368-e1f5a6a9654d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"212553fe-f689-4d32-9368-e1f5a6a9654d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:54 crc kubenswrapper[4895]: I0129 09:04:54.043838 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxk88\" (UniqueName: \"kubernetes.io/projected/212553fe-f689-4d32-9368-e1f5a6a9654d-kube-api-access-xxk88\") pod \"nova-cell1-novncproxy-0\" (UID: \"212553fe-f689-4d32-9368-e1f5a6a9654d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:54 crc kubenswrapper[4895]: I0129 09:04:54.043928 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/212553fe-f689-4d32-9368-e1f5a6a9654d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"212553fe-f689-4d32-9368-e1f5a6a9654d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:54 crc kubenswrapper[4895]: I0129 09:04:54.044014 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212553fe-f689-4d32-9368-e1f5a6a9654d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"212553fe-f689-4d32-9368-e1f5a6a9654d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:54 crc kubenswrapper[4895]: I0129 
09:04:54.044128 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/212553fe-f689-4d32-9368-e1f5a6a9654d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"212553fe-f689-4d32-9368-e1f5a6a9654d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:54 crc kubenswrapper[4895]: I0129 09:04:54.044168 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/212553fe-f689-4d32-9368-e1f5a6a9654d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"212553fe-f689-4d32-9368-e1f5a6a9654d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:54 crc kubenswrapper[4895]: I0129 09:04:54.052798 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/212553fe-f689-4d32-9368-e1f5a6a9654d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"212553fe-f689-4d32-9368-e1f5a6a9654d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:54 crc kubenswrapper[4895]: I0129 09:04:54.056564 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/212553fe-f689-4d32-9368-e1f5a6a9654d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"212553fe-f689-4d32-9368-e1f5a6a9654d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:54 crc kubenswrapper[4895]: I0129 09:04:54.063347 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/212553fe-f689-4d32-9368-e1f5a6a9654d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"212553fe-f689-4d32-9368-e1f5a6a9654d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:54 crc kubenswrapper[4895]: I0129 09:04:54.066660 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212553fe-f689-4d32-9368-e1f5a6a9654d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"212553fe-f689-4d32-9368-e1f5a6a9654d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:54 crc kubenswrapper[4895]: I0129 09:04:54.070053 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxk88\" (UniqueName: \"kubernetes.io/projected/212553fe-f689-4d32-9368-e1f5a6a9654d-kube-api-access-xxk88\") pod \"nova-cell1-novncproxy-0\" (UID: \"212553fe-f689-4d32-9368-e1f5a6a9654d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:54 crc kubenswrapper[4895]: I0129 09:04:54.125338 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:54 crc kubenswrapper[4895]: I0129 09:04:54.240732 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 09:04:54 crc kubenswrapper[4895]: I0129 09:04:54.242553 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 09:04:54 crc kubenswrapper[4895]: I0129 09:04:54.251203 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 09:04:54 crc kubenswrapper[4895]: I0129 09:04:54.691305 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 09:04:54 crc kubenswrapper[4895]: W0129 09:04:54.699196 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod212553fe_f689_4d32_9368_e1f5a6a9654d.slice/crio-76ba4b7150411a72d36c71b8a771047a38983c4b5bd94202553f182aa829f6d8 WatchSource:0}: Error finding container 76ba4b7150411a72d36c71b8a771047a38983c4b5bd94202553f182aa829f6d8: Status 404 returned error can't find the container with id 
76ba4b7150411a72d36c71b8a771047a38983c4b5bd94202553f182aa829f6d8 Jan 29 09:04:55 crc kubenswrapper[4895]: I0129 09:04:55.225801 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3405b2be-d52c-4e7e-846e-8ae737452bae" path="/var/lib/kubelet/pods/3405b2be-d52c-4e7e-846e-8ae737452bae/volumes" Jan 29 09:04:55 crc kubenswrapper[4895]: I0129 09:04:55.254716 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"212553fe-f689-4d32-9368-e1f5a6a9654d","Type":"ContainerStarted","Data":"9c303da5fb58007186c3d26a02948be7716ad68aa785cb37740e8ee4ea058fbc"} Jan 29 09:04:55 crc kubenswrapper[4895]: I0129 09:04:55.254778 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"212553fe-f689-4d32-9368-e1f5a6a9654d","Type":"ContainerStarted","Data":"76ba4b7150411a72d36c71b8a771047a38983c4b5bd94202553f182aa829f6d8"} Jan 29 09:04:55 crc kubenswrapper[4895]: I0129 09:04:55.260139 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 09:04:55 crc kubenswrapper[4895]: I0129 09:04:55.283825 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.283779688 podStartE2EDuration="2.283779688s" podCreationTimestamp="2026-01-29 09:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:04:55.278128997 +0000 UTC m=+1436.919637163" watchObservedRunningTime="2026-01-29 09:04:55.283779688 +0000 UTC m=+1436.925287834" Jan 29 09:04:57 crc kubenswrapper[4895]: I0129 09:04:57.646715 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 09:04:57 crc kubenswrapper[4895]: I0129 09:04:57.648320 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 
09:04:57 crc kubenswrapper[4895]: I0129 09:04:57.745354 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 09:04:57 crc kubenswrapper[4895]: I0129 09:04:57.775577 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.287682 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.293267 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.497119 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-82zv8"] Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.499206 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.530283 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-82zv8"] Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.666786 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hptlq\" (UniqueName: \"kubernetes.io/projected/d9878c3e-4959-4f63-bfc3-899f9a55eee2-kube-api-access-hptlq\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.666998 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9878c3e-4959-4f63-bfc3-899f9a55eee2-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc 
kubenswrapper[4895]: I0129 09:04:58.667032 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9878c3e-4959-4f63-bfc3-899f9a55eee2-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.667060 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9878c3e-4959-4f63-bfc3-899f9a55eee2-config\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.667126 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9878c3e-4959-4f63-bfc3-899f9a55eee2-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.667155 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9878c3e-4959-4f63-bfc3-899f9a55eee2-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.769344 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hptlq\" (UniqueName: \"kubernetes.io/projected/d9878c3e-4959-4f63-bfc3-899f9a55eee2-kube-api-access-hptlq\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 
09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.769520 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9878c3e-4959-4f63-bfc3-899f9a55eee2-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.769554 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9878c3e-4959-4f63-bfc3-899f9a55eee2-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.769581 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9878c3e-4959-4f63-bfc3-899f9a55eee2-config\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.769639 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9878c3e-4959-4f63-bfc3-899f9a55eee2-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.769666 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9878c3e-4959-4f63-bfc3-899f9a55eee2-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.771165 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9878c3e-4959-4f63-bfc3-899f9a55eee2-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.771195 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9878c3e-4959-4f63-bfc3-899f9a55eee2-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.771256 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9878c3e-4959-4f63-bfc3-899f9a55eee2-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.773913 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9878c3e-4959-4f63-bfc3-899f9a55eee2-config\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.773981 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9878c3e-4959-4f63-bfc3-899f9a55eee2-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.795018 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hptlq\" (UniqueName: 
\"kubernetes.io/projected/d9878c3e-4959-4f63-bfc3-899f9a55eee2-kube-api-access-hptlq\") pod \"dnsmasq-dns-cd5cbd7b9-82zv8\" (UID: \"d9878c3e-4959-4f63-bfc3-899f9a55eee2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:58 crc kubenswrapper[4895]: I0129 09:04:58.854798 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:04:59 crc kubenswrapper[4895]: I0129 09:04:59.126187 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:04:59 crc kubenswrapper[4895]: I0129 09:04:59.447458 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-82zv8"] Jan 29 09:05:00 crc kubenswrapper[4895]: I0129 09:05:00.354788 4895 generic.go:334] "Generic (PLEG): container finished" podID="d9878c3e-4959-4f63-bfc3-899f9a55eee2" containerID="1749ce46709d9f68e127223dc3d7961f5d71a3904a3f8da3b973df80efb3b35a" exitCode=0 Jan 29 09:05:00 crc kubenswrapper[4895]: I0129 09:05:00.354943 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" event={"ID":"d9878c3e-4959-4f63-bfc3-899f9a55eee2","Type":"ContainerDied","Data":"1749ce46709d9f68e127223dc3d7961f5d71a3904a3f8da3b973df80efb3b35a"} Jan 29 09:05:00 crc kubenswrapper[4895]: I0129 09:05:00.355297 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" event={"ID":"d9878c3e-4959-4f63-bfc3-899f9a55eee2","Type":"ContainerStarted","Data":"a6c2b0959448272c23ad0346e32be89c076f79d8ece4443665795b6128e5d625"} Jan 29 09:05:01 crc kubenswrapper[4895]: I0129 09:05:01.282546 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:05:01 crc kubenswrapper[4895]: I0129 09:05:01.369494 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" 
event={"ID":"d9878c3e-4959-4f63-bfc3-899f9a55eee2","Type":"ContainerStarted","Data":"2e930f4ddd411755c3c544d6a3ebc4ee57fd9a7b0862a3e598bdf6bfcd7282ee"} Jan 29 09:05:01 crc kubenswrapper[4895]: I0129 09:05:01.369727 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c3333219-55c5-4bbb-b48a-6f9e83849794" containerName="nova-api-log" containerID="cri-o://0e1afd25855d3fc44f5f8674b7e18630204b4d7ae42aef6c0de6085ddd18891e" gracePeriod=30 Jan 29 09:05:01 crc kubenswrapper[4895]: I0129 09:05:01.370529 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c3333219-55c5-4bbb-b48a-6f9e83849794" containerName="nova-api-api" containerID="cri-o://ec7ecae43c22ddb5a941e912ee95f35abf8ff2136de3fcf686d868302ea51ddf" gracePeriod=30 Jan 29 09:05:01 crc kubenswrapper[4895]: I0129 09:05:01.405208 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" podStartSLOduration=3.405178121 podStartE2EDuration="3.405178121s" podCreationTimestamp="2026-01-29 09:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:05:01.398052871 +0000 UTC m=+1443.039561027" watchObservedRunningTime="2026-01-29 09:05:01.405178121 +0000 UTC m=+1443.046686267" Jan 29 09:05:01 crc kubenswrapper[4895]: I0129 09:05:01.612365 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 09:05:01 crc kubenswrapper[4895]: I0129 09:05:01.612830 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="ceilometer-central-agent" containerID="cri-o://571acce083a56e2e6183efb9d84a23c2881ab9913fc8dfab14104083091392fd" gracePeriod=30 Jan 29 09:05:01 crc kubenswrapper[4895]: I0129 09:05:01.612874 4895 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="sg-core" containerID="cri-o://c19cceb25246c2b90cd64e89f22b7d4ddc23e5b9db63a5827b4465a31614fa1c" gracePeriod=30 Jan 29 09:05:01 crc kubenswrapper[4895]: I0129 09:05:01.612845 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="proxy-httpd" containerID="cri-o://01b55a0f2a5ad462bcfd7733b920b35f7aa7ab9898d5ce83015e9edaa07c74fb" gracePeriod=30 Jan 29 09:05:01 crc kubenswrapper[4895]: I0129 09:05:01.613145 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="ceilometer-notification-agent" containerID="cri-o://74abc32ad1460f0af3918a697b0c43a92eb5c6837ae538dcdbcfa55589f796f2" gracePeriod=30 Jan 29 09:05:01 crc kubenswrapper[4895]: I0129 09:05:01.619326 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.206:3000/\": read tcp 10.217.0.2:55914->10.217.0.206:3000: read: connection reset by peer" Jan 29 09:05:02 crc kubenswrapper[4895]: I0129 09:05:02.387430 4895 generic.go:334] "Generic (PLEG): container finished" podID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerID="01b55a0f2a5ad462bcfd7733b920b35f7aa7ab9898d5ce83015e9edaa07c74fb" exitCode=0 Jan 29 09:05:02 crc kubenswrapper[4895]: I0129 09:05:02.387983 4895 generic.go:334] "Generic (PLEG): container finished" podID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerID="c19cceb25246c2b90cd64e89f22b7d4ddc23e5b9db63a5827b4465a31614fa1c" exitCode=2 Jan 29 09:05:02 crc kubenswrapper[4895]: I0129 09:05:02.388013 4895 generic.go:334] "Generic (PLEG): container finished" 
podID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerID="571acce083a56e2e6183efb9d84a23c2881ab9913fc8dfab14104083091392fd" exitCode=0 Jan 29 09:05:02 crc kubenswrapper[4895]: I0129 09:05:02.387523 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"470474b6-2796-4bac-b679-8dd1d9a043f2","Type":"ContainerDied","Data":"01b55a0f2a5ad462bcfd7733b920b35f7aa7ab9898d5ce83015e9edaa07c74fb"} Jan 29 09:05:02 crc kubenswrapper[4895]: I0129 09:05:02.388078 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"470474b6-2796-4bac-b679-8dd1d9a043f2","Type":"ContainerDied","Data":"c19cceb25246c2b90cd64e89f22b7d4ddc23e5b9db63a5827b4465a31614fa1c"} Jan 29 09:05:02 crc kubenswrapper[4895]: I0129 09:05:02.388095 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"470474b6-2796-4bac-b679-8dd1d9a043f2","Type":"ContainerDied","Data":"571acce083a56e2e6183efb9d84a23c2881ab9913fc8dfab14104083091392fd"} Jan 29 09:05:02 crc kubenswrapper[4895]: I0129 09:05:02.392432 4895 generic.go:334] "Generic (PLEG): container finished" podID="c3333219-55c5-4bbb-b48a-6f9e83849794" containerID="0e1afd25855d3fc44f5f8674b7e18630204b4d7ae42aef6c0de6085ddd18891e" exitCode=143 Jan 29 09:05:02 crc kubenswrapper[4895]: I0129 09:05:02.392533 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3333219-55c5-4bbb-b48a-6f9e83849794","Type":"ContainerDied","Data":"0e1afd25855d3fc44f5f8674b7e18630204b4d7ae42aef6c0de6085ddd18891e"} Jan 29 09:05:02 crc kubenswrapper[4895]: I0129 09:05:02.393414 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.011935 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.126266 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.149006 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.212120 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-sg-core-conf-yaml\") pod \"470474b6-2796-4bac-b679-8dd1d9a043f2\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.212190 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/470474b6-2796-4bac-b679-8dd1d9a043f2-log-httpd\") pod \"470474b6-2796-4bac-b679-8dd1d9a043f2\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.212247 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-config-data\") pod \"470474b6-2796-4bac-b679-8dd1d9a043f2\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.212436 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/470474b6-2796-4bac-b679-8dd1d9a043f2-run-httpd\") pod \"470474b6-2796-4bac-b679-8dd1d9a043f2\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.212736 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/470474b6-2796-4bac-b679-8dd1d9a043f2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "470474b6-2796-4bac-b679-8dd1d9a043f2" (UID: "470474b6-2796-4bac-b679-8dd1d9a043f2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.212753 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/470474b6-2796-4bac-b679-8dd1d9a043f2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "470474b6-2796-4bac-b679-8dd1d9a043f2" (UID: "470474b6-2796-4bac-b679-8dd1d9a043f2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.212960 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-scripts\") pod \"470474b6-2796-4bac-b679-8dd1d9a043f2\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.213027 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-combined-ca-bundle\") pod \"470474b6-2796-4bac-b679-8dd1d9a043f2\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.213799 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbz6k\" (UniqueName: \"kubernetes.io/projected/470474b6-2796-4bac-b679-8dd1d9a043f2-kube-api-access-cbz6k\") pod \"470474b6-2796-4bac-b679-8dd1d9a043f2\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") " Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.213864 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-ceilometer-tls-certs\") pod \"470474b6-2796-4bac-b679-8dd1d9a043f2\" (UID: \"470474b6-2796-4bac-b679-8dd1d9a043f2\") "
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.214464 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/470474b6-2796-4bac-b679-8dd1d9a043f2-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.214485 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/470474b6-2796-4bac-b679-8dd1d9a043f2-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.219849 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/470474b6-2796-4bac-b679-8dd1d9a043f2-kube-api-access-cbz6k" (OuterVolumeSpecName: "kube-api-access-cbz6k") pod "470474b6-2796-4bac-b679-8dd1d9a043f2" (UID: "470474b6-2796-4bac-b679-8dd1d9a043f2"). InnerVolumeSpecName "kube-api-access-cbz6k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.227297 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-scripts" (OuterVolumeSpecName: "scripts") pod "470474b6-2796-4bac-b679-8dd1d9a043f2" (UID: "470474b6-2796-4bac-b679-8dd1d9a043f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.249232 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "470474b6-2796-4bac-b679-8dd1d9a043f2" (UID: "470474b6-2796-4bac-b679-8dd1d9a043f2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.324015 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.324105 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbz6k\" (UniqueName: \"kubernetes.io/projected/470474b6-2796-4bac-b679-8dd1d9a043f2-kube-api-access-cbz6k\") on node \"crc\" DevicePath \"\""
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.324127 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.358277 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "470474b6-2796-4bac-b679-8dd1d9a043f2" (UID: "470474b6-2796-4bac-b679-8dd1d9a043f2"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.421171 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "470474b6-2796-4bac-b679-8dd1d9a043f2" (UID: "470474b6-2796-4bac-b679-8dd1d9a043f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.427561 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.427608 4895 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.430637 4895 generic.go:334] "Generic (PLEG): container finished" podID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerID="74abc32ad1460f0af3918a697b0c43a92eb5c6837ae538dcdbcfa55589f796f2" exitCode=0
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.432284 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.432375 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-config-data" (OuterVolumeSpecName: "config-data") pod "470474b6-2796-4bac-b679-8dd1d9a043f2" (UID: "470474b6-2796-4bac-b679-8dd1d9a043f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.432473 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"470474b6-2796-4bac-b679-8dd1d9a043f2","Type":"ContainerDied","Data":"74abc32ad1460f0af3918a697b0c43a92eb5c6837ae538dcdbcfa55589f796f2"}
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.432554 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"470474b6-2796-4bac-b679-8dd1d9a043f2","Type":"ContainerDied","Data":"7f3350c91859efe856c299ca439edb19e2a205285a0d2d60a3d8e215e97c358b"}
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.432583 4895 scope.go:117] "RemoveContainer" containerID="01b55a0f2a5ad462bcfd7733b920b35f7aa7ab9898d5ce83015e9edaa07c74fb"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.531736 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470474b6-2796-4bac-b679-8dd1d9a043f2-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.558445 4895 scope.go:117] "RemoveContainer" containerID="c19cceb25246c2b90cd64e89f22b7d4ddc23e5b9db63a5827b4465a31614fa1c"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.587269 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.614379 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.625472 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.666606 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:05:04 crc kubenswrapper[4895]: E0129 09:05:04.667257 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="ceilometer-notification-agent"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.667278 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="ceilometer-notification-agent"
Jan 29 09:05:04 crc kubenswrapper[4895]: E0129 09:05:04.667303 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="sg-core"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.667312 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="sg-core"
Jan 29 09:05:04 crc kubenswrapper[4895]: E0129 09:05:04.667332 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="proxy-httpd"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.667341 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="proxy-httpd"
Jan 29 09:05:04 crc kubenswrapper[4895]: E0129 09:05:04.667351 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="ceilometer-central-agent"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.667357 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="ceilometer-central-agent"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.667578 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="sg-core"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.667596 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="ceilometer-central-agent"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.667605 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="ceilometer-notification-agent"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.667619 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" containerName="proxy-httpd"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.669671 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.677735 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.677816 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.678066 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.688076 4895 scope.go:117] "RemoveContainer" containerID="74abc32ad1460f0af3918a697b0c43a92eb5c6837ae538dcdbcfa55589f796f2"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.692779 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.777858 4895 scope.go:117] "RemoveContainer" containerID="571acce083a56e2e6183efb9d84a23c2881ab9913fc8dfab14104083091392fd"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.817513 4895 scope.go:117] "RemoveContainer" containerID="01b55a0f2a5ad462bcfd7733b920b35f7aa7ab9898d5ce83015e9edaa07c74fb"
Jan 29 09:05:04 crc kubenswrapper[4895]: E0129 09:05:04.818297 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01b55a0f2a5ad462bcfd7733b920b35f7aa7ab9898d5ce83015e9edaa07c74fb\": container with ID starting with 01b55a0f2a5ad462bcfd7733b920b35f7aa7ab9898d5ce83015e9edaa07c74fb not found: ID does not exist" containerID="01b55a0f2a5ad462bcfd7733b920b35f7aa7ab9898d5ce83015e9edaa07c74fb"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.818345 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b55a0f2a5ad462bcfd7733b920b35f7aa7ab9898d5ce83015e9edaa07c74fb"} err="failed to get container status \"01b55a0f2a5ad462bcfd7733b920b35f7aa7ab9898d5ce83015e9edaa07c74fb\": rpc error: code = NotFound desc = could not find container \"01b55a0f2a5ad462bcfd7733b920b35f7aa7ab9898d5ce83015e9edaa07c74fb\": container with ID starting with 01b55a0f2a5ad462bcfd7733b920b35f7aa7ab9898d5ce83015e9edaa07c74fb not found: ID does not exist"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.818381 4895 scope.go:117] "RemoveContainer" containerID="c19cceb25246c2b90cd64e89f22b7d4ddc23e5b9db63a5827b4465a31614fa1c"
Jan 29 09:05:04 crc kubenswrapper[4895]: E0129 09:05:04.819203 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c19cceb25246c2b90cd64e89f22b7d4ddc23e5b9db63a5827b4465a31614fa1c\": container with ID starting with c19cceb25246c2b90cd64e89f22b7d4ddc23e5b9db63a5827b4465a31614fa1c not found: ID does not exist" containerID="c19cceb25246c2b90cd64e89f22b7d4ddc23e5b9db63a5827b4465a31614fa1c"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.819238 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c19cceb25246c2b90cd64e89f22b7d4ddc23e5b9db63a5827b4465a31614fa1c"} err="failed to get container status \"c19cceb25246c2b90cd64e89f22b7d4ddc23e5b9db63a5827b4465a31614fa1c\": rpc error: code = NotFound desc = could not find container \"c19cceb25246c2b90cd64e89f22b7d4ddc23e5b9db63a5827b4465a31614fa1c\": container with ID starting with c19cceb25246c2b90cd64e89f22b7d4ddc23e5b9db63a5827b4465a31614fa1c not found: ID does not exist"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.819261 4895 scope.go:117] "RemoveContainer" containerID="74abc32ad1460f0af3918a697b0c43a92eb5c6837ae538dcdbcfa55589f796f2"
Jan 29 09:05:04 crc kubenswrapper[4895]: E0129 09:05:04.819632 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74abc32ad1460f0af3918a697b0c43a92eb5c6837ae538dcdbcfa55589f796f2\": container with ID starting with 74abc32ad1460f0af3918a697b0c43a92eb5c6837ae538dcdbcfa55589f796f2 not found: ID does not exist" containerID="74abc32ad1460f0af3918a697b0c43a92eb5c6837ae538dcdbcfa55589f796f2"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.819664 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74abc32ad1460f0af3918a697b0c43a92eb5c6837ae538dcdbcfa55589f796f2"} err="failed to get container status \"74abc32ad1460f0af3918a697b0c43a92eb5c6837ae538dcdbcfa55589f796f2\": rpc error: code = NotFound desc = could not find container \"74abc32ad1460f0af3918a697b0c43a92eb5c6837ae538dcdbcfa55589f796f2\": container with ID starting with 74abc32ad1460f0af3918a697b0c43a92eb5c6837ae538dcdbcfa55589f796f2 not found: ID does not exist"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.819686 4895 scope.go:117] "RemoveContainer" containerID="571acce083a56e2e6183efb9d84a23c2881ab9913fc8dfab14104083091392fd"
Jan 29 09:05:04 crc kubenswrapper[4895]: E0129 09:05:04.820184 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"571acce083a56e2e6183efb9d84a23c2881ab9913fc8dfab14104083091392fd\": container with ID starting with 571acce083a56e2e6183efb9d84a23c2881ab9913fc8dfab14104083091392fd not found: ID does not exist" containerID="571acce083a56e2e6183efb9d84a23c2881ab9913fc8dfab14104083091392fd"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.820214 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"571acce083a56e2e6183efb9d84a23c2881ab9913fc8dfab14104083091392fd"} err="failed to get container status \"571acce083a56e2e6183efb9d84a23c2881ab9913fc8dfab14104083091392fd\": rpc error: code = NotFound desc = could not find container \"571acce083a56e2e6183efb9d84a23c2881ab9913fc8dfab14104083091392fd\": container with ID starting with 571acce083a56e2e6183efb9d84a23c2881ab9913fc8dfab14104083091392fd not found: ID does not exist"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.840756 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dced459-73d7-4079-8450-1d22972197c0-config-data\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.840833 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnskt\" (UniqueName: \"kubernetes.io/projected/6dced459-73d7-4079-8450-1d22972197c0-kube-api-access-hnskt\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.840869 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dced459-73d7-4079-8450-1d22972197c0-run-httpd\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.840929 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6dced459-73d7-4079-8450-1d22972197c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.841063 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dced459-73d7-4079-8450-1d22972197c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.841129 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dced459-73d7-4079-8450-1d22972197c0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.841196 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dced459-73d7-4079-8450-1d22972197c0-scripts\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.841273 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dced459-73d7-4079-8450-1d22972197c0-log-httpd\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.909378 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-wqmlx"]
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.912327 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wqmlx"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.915474 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.915680 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.925360 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wqmlx"]
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.943164 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dced459-73d7-4079-8450-1d22972197c0-scripts\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.943238 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dced459-73d7-4079-8450-1d22972197c0-log-httpd\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.943344 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dced459-73d7-4079-8450-1d22972197c0-config-data\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.943394 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnskt\" (UniqueName: \"kubernetes.io/projected/6dced459-73d7-4079-8450-1d22972197c0-kube-api-access-hnskt\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.943430 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dced459-73d7-4079-8450-1d22972197c0-run-httpd\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.943470 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6dced459-73d7-4079-8450-1d22972197c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.943550 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dced459-73d7-4079-8450-1d22972197c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.943599 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dced459-73d7-4079-8450-1d22972197c0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.944830 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dced459-73d7-4079-8450-1d22972197c0-run-httpd\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.954091 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dced459-73d7-4079-8450-1d22972197c0-log-httpd\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.966794 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dced459-73d7-4079-8450-1d22972197c0-config-data\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.966844 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dced459-73d7-4079-8450-1d22972197c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.967829 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dced459-73d7-4079-8450-1d22972197c0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.968762 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6dced459-73d7-4079-8450-1d22972197c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.969222 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dced459-73d7-4079-8450-1d22972197c0-scripts\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:04 crc kubenswrapper[4895]: I0129 09:05:04.972640 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnskt\" (UniqueName: \"kubernetes.io/projected/6dced459-73d7-4079-8450-1d22972197c0-kube-api-access-hnskt\") pod \"ceilometer-0\" (UID: \"6dced459-73d7-4079-8450-1d22972197c0\") " pod="openstack/ceilometer-0"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.046477 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4s6w\" (UniqueName: \"kubernetes.io/projected/e7867aa6-5213-42fd-b3fd-592a74e6959e-kube-api-access-l4s6w\") pod \"nova-cell1-cell-mapping-wqmlx\" (UID: \"e7867aa6-5213-42fd-b3fd-592a74e6959e\") " pod="openstack/nova-cell1-cell-mapping-wqmlx"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.046788 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wqmlx\" (UID: \"e7867aa6-5213-42fd-b3fd-592a74e6959e\") " pod="openstack/nova-cell1-cell-mapping-wqmlx"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.046909 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-scripts\") pod \"nova-cell1-cell-mapping-wqmlx\" (UID: \"e7867aa6-5213-42fd-b3fd-592a74e6959e\") " pod="openstack/nova-cell1-cell-mapping-wqmlx"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.047191 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-config-data\") pod \"nova-cell1-cell-mapping-wqmlx\" (UID: \"e7867aa6-5213-42fd-b3fd-592a74e6959e\") " pod="openstack/nova-cell1-cell-mapping-wqmlx"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.106478 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.149620 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wqmlx\" (UID: \"e7867aa6-5213-42fd-b3fd-592a74e6959e\") " pod="openstack/nova-cell1-cell-mapping-wqmlx"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.149718 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-scripts\") pod \"nova-cell1-cell-mapping-wqmlx\" (UID: \"e7867aa6-5213-42fd-b3fd-592a74e6959e\") " pod="openstack/nova-cell1-cell-mapping-wqmlx"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.149826 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-config-data\") pod \"nova-cell1-cell-mapping-wqmlx\" (UID: \"e7867aa6-5213-42fd-b3fd-592a74e6959e\") " pod="openstack/nova-cell1-cell-mapping-wqmlx"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.149997 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4s6w\" (UniqueName: \"kubernetes.io/projected/e7867aa6-5213-42fd-b3fd-592a74e6959e-kube-api-access-l4s6w\") pod \"nova-cell1-cell-mapping-wqmlx\" (UID: \"e7867aa6-5213-42fd-b3fd-592a74e6959e\") " pod="openstack/nova-cell1-cell-mapping-wqmlx"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.157890 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-config-data\") pod \"nova-cell1-cell-mapping-wqmlx\" (UID: \"e7867aa6-5213-42fd-b3fd-592a74e6959e\") " pod="openstack/nova-cell1-cell-mapping-wqmlx"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.159352 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-scripts\") pod \"nova-cell1-cell-mapping-wqmlx\" (UID: \"e7867aa6-5213-42fd-b3fd-592a74e6959e\") " pod="openstack/nova-cell1-cell-mapping-wqmlx"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.163146 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wqmlx\" (UID: \"e7867aa6-5213-42fd-b3fd-592a74e6959e\") " pod="openstack/nova-cell1-cell-mapping-wqmlx"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.172978 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4s6w\" (UniqueName: \"kubernetes.io/projected/e7867aa6-5213-42fd-b3fd-592a74e6959e-kube-api-access-l4s6w\") pod \"nova-cell1-cell-mapping-wqmlx\" (UID: \"e7867aa6-5213-42fd-b3fd-592a74e6959e\") " pod="openstack/nova-cell1-cell-mapping-wqmlx"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.231870 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="470474b6-2796-4bac-b679-8dd1d9a043f2" path="/var/lib/kubelet/pods/470474b6-2796-4bac-b679-8dd1d9a043f2/volumes"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.233570 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wqmlx"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.448150 4895 generic.go:334] "Generic (PLEG): container finished" podID="c3333219-55c5-4bbb-b48a-6f9e83849794" containerID="ec7ecae43c22ddb5a941e912ee95f35abf8ff2136de3fcf686d868302ea51ddf" exitCode=0
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.450169 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3333219-55c5-4bbb-b48a-6f9e83849794","Type":"ContainerDied","Data":"ec7ecae43c22ddb5a941e912ee95f35abf8ff2136de3fcf686d868302ea51ddf"}
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.719019 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.838488 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.981769 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rkcw\" (UniqueName: \"kubernetes.io/projected/c3333219-55c5-4bbb-b48a-6f9e83849794-kube-api-access-7rkcw\") pod \"c3333219-55c5-4bbb-b48a-6f9e83849794\" (UID: \"c3333219-55c5-4bbb-b48a-6f9e83849794\") "
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.981960 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3333219-55c5-4bbb-b48a-6f9e83849794-config-data\") pod \"c3333219-55c5-4bbb-b48a-6f9e83849794\" (UID: \"c3333219-55c5-4bbb-b48a-6f9e83849794\") "
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.982228 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3333219-55c5-4bbb-b48a-6f9e83849794-logs\") pod \"c3333219-55c5-4bbb-b48a-6f9e83849794\" (UID: \"c3333219-55c5-4bbb-b48a-6f9e83849794\") "
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.982322 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3333219-55c5-4bbb-b48a-6f9e83849794-combined-ca-bundle\") pod \"c3333219-55c5-4bbb-b48a-6f9e83849794\" (UID: \"c3333219-55c5-4bbb-b48a-6f9e83849794\") "
Jan 29 09:05:05 crc kubenswrapper[4895]: I0129 09:05:05.983384 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3333219-55c5-4bbb-b48a-6f9e83849794-logs" (OuterVolumeSpecName: "logs") pod "c3333219-55c5-4bbb-b48a-6f9e83849794" (UID: "c3333219-55c5-4bbb-b48a-6f9e83849794"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.008350 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3333219-55c5-4bbb-b48a-6f9e83849794-kube-api-access-7rkcw" (OuterVolumeSpecName: "kube-api-access-7rkcw") pod "c3333219-55c5-4bbb-b48a-6f9e83849794" (UID: "c3333219-55c5-4bbb-b48a-6f9e83849794"). InnerVolumeSpecName "kube-api-access-7rkcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.017938 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3333219-55c5-4bbb-b48a-6f9e83849794-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3333219-55c5-4bbb-b48a-6f9e83849794" (UID: "c3333219-55c5-4bbb-b48a-6f9e83849794"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.038295 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3333219-55c5-4bbb-b48a-6f9e83849794-config-data" (OuterVolumeSpecName: "config-data") pod "c3333219-55c5-4bbb-b48a-6f9e83849794" (UID: "c3333219-55c5-4bbb-b48a-6f9e83849794"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.052053 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wqmlx"]
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.087055 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3333219-55c5-4bbb-b48a-6f9e83849794-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.087116 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rkcw\" (UniqueName: \"kubernetes.io/projected/c3333219-55c5-4bbb-b48a-6f9e83849794-kube-api-access-7rkcw\") on node \"crc\" DevicePath \"\""
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.087131 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3333219-55c5-4bbb-b48a-6f9e83849794-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.087171 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3333219-55c5-4bbb-b48a-6f9e83849794-logs\") on node \"crc\" DevicePath \"\""
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.464325 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wqmlx" event={"ID":"e7867aa6-5213-42fd-b3fd-592a74e6959e","Type":"ContainerStarted","Data":"e248ecf7514232be0d7ff76c58cbad6dc9f1c2bf4367a5acb67c7c40f4d465d4"}
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.464719 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wqmlx" event={"ID":"e7867aa6-5213-42fd-b3fd-592a74e6959e","Type":"ContainerStarted","Data":"6700033bf8ad9f8d166016e89a5d0bbab76a13d48809ec73b9e9f24fe648df1c"}
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.465832 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6dced459-73d7-4079-8450-1d22972197c0","Type":"ContainerStarted","Data":"2cb7a38f11d9a272963df800404bb788095f8fc7a38b880f05916956c7c13d66"}
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.468242 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3333219-55c5-4bbb-b48a-6f9e83849794","Type":"ContainerDied","Data":"e7e59123ccbed10bf3ad611eb657c3de2229194424f8005464aa5ec4fa6d00df"}
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.468321 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.468327 4895 scope.go:117] "RemoveContainer" containerID="ec7ecae43c22ddb5a941e912ee95f35abf8ff2136de3fcf686d868302ea51ddf"
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.506682 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-wqmlx" podStartSLOduration=2.506648768 podStartE2EDuration="2.506648768s" podCreationTimestamp="2026-01-29 09:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:05:06.503404101 +0000 UTC m=+1448.144912247" watchObservedRunningTime="2026-01-29 09:05:06.506648768 +0000 UTC m=+1448.148156914"
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.548463 4895 scope.go:117] "RemoveContainer" containerID="0e1afd25855d3fc44f5f8674b7e18630204b4d7ae42aef6c0de6085ddd18891e"
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.566392 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.592592 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.612106 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 29 09:05:06 crc kubenswrapper[4895]: E0129 09:05:06.612796 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3333219-55c5-4bbb-b48a-6f9e83849794" containerName="nova-api-api"
Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.612841 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3333219-55c5-4bbb-b48a-6f9e83849794" containerName="nova-api-api"
Jan 29 09:05:06 crc kubenswrapper[4895]: E0129 09:05:06.612877 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3333219-55c5-4bbb-b48a-6f9e83849794"
containerName="nova-api-log" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.612886 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3333219-55c5-4bbb-b48a-6f9e83849794" containerName="nova-api-log" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.613174 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3333219-55c5-4bbb-b48a-6f9e83849794" containerName="nova-api-log" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.613208 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3333219-55c5-4bbb-b48a-6f9e83849794" containerName="nova-api-api" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.615771 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.622839 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.632124 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.632747 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.633326 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.733493 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9lkl\" (UniqueName: \"kubernetes.io/projected/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-kube-api-access-w9lkl\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.733605 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.733630 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-config-data\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.733651 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.734029 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-logs\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.734422 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-public-tls-certs\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.836703 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " 
pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.836783 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-config-data\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.836817 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.836977 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-logs\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.837163 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-public-tls-certs\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.837222 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9lkl\" (UniqueName: \"kubernetes.io/projected/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-kube-api-access-w9lkl\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.838346 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-logs\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.860352 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.862511 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-config-data\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.862635 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.869976 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-public-tls-certs\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.873245 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9lkl\" (UniqueName: \"kubernetes.io/projected/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-kube-api-access-w9lkl\") pod \"nova-api-0\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " pod="openstack/nova-api-0" Jan 29 09:05:06 crc kubenswrapper[4895]: I0129 09:05:06.970894 4895 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:05:07 crc kubenswrapper[4895]: I0129 09:05:07.258648 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3333219-55c5-4bbb-b48a-6f9e83849794" path="/var/lib/kubelet/pods/c3333219-55c5-4bbb-b48a-6f9e83849794/volumes" Jan 29 09:05:07 crc kubenswrapper[4895]: I0129 09:05:07.497798 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6dced459-73d7-4079-8450-1d22972197c0","Type":"ContainerStarted","Data":"ddd3049780091eeab383ac6341fa362624940ff8a85efcb7135b35cb75c98217"} Jan 29 09:05:07 crc kubenswrapper[4895]: I0129 09:05:07.498136 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6dced459-73d7-4079-8450-1d22972197c0","Type":"ContainerStarted","Data":"6ee593da229c312feb858e52fd380383b5e41f00a54b115fab0d6bf3783ee022"} Jan 29 09:05:07 crc kubenswrapper[4895]: I0129 09:05:07.565065 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:05:08 crc kubenswrapper[4895]: I0129 09:05:08.543392 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4","Type":"ContainerStarted","Data":"854e6f35fa79d8d126a6508e386d76e3c37bf65d757d170aeeb5ea3fe84b7277"} Jan 29 09:05:08 crc kubenswrapper[4895]: I0129 09:05:08.547690 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4","Type":"ContainerStarted","Data":"e27faa0d6f52132379d45d5b0f1839d5afe4d360cbab1d7513798df02cb234b1"} Jan 29 09:05:08 crc kubenswrapper[4895]: I0129 09:05:08.858505 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-82zv8" Jan 29 09:05:08 crc kubenswrapper[4895]: I0129 09:05:08.950481 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-bccf8f775-kfhbt"] Jan 29 09:05:08 crc kubenswrapper[4895]: I0129 09:05:08.950823 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" podUID="abb41055-685a-4b58-85e5-fa877703ac61" containerName="dnsmasq-dns" containerID="cri-o://0253525c28771cdac812e23fecad9c126e34a839e6e88c655182982ca71f4d69" gracePeriod=10 Jan 29 09:05:09 crc kubenswrapper[4895]: I0129 09:05:09.557980 4895 generic.go:334] "Generic (PLEG): container finished" podID="abb41055-685a-4b58-85e5-fa877703ac61" containerID="0253525c28771cdac812e23fecad9c126e34a839e6e88c655182982ca71f4d69" exitCode=0 Jan 29 09:05:09 crc kubenswrapper[4895]: I0129 09:05:09.558018 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" event={"ID":"abb41055-685a-4b58-85e5-fa877703ac61","Type":"ContainerDied","Data":"0253525c28771cdac812e23fecad9c126e34a839e6e88c655182982ca71f4d69"} Jan 29 09:05:09 crc kubenswrapper[4895]: I0129 09:05:09.562598 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6dced459-73d7-4079-8450-1d22972197c0","Type":"ContainerStarted","Data":"0e50cb8b4b86d183f47bf95a2289f241a4325b84ebf5812baa681efe35c47bac"} Jan 29 09:05:09 crc kubenswrapper[4895]: I0129 09:05:09.564959 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4","Type":"ContainerStarted","Data":"7eb316dc1adff31e10c5f274eb7af19b970b86ba0b8d8f156f54e18d7bca8fc4"} Jan 29 09:05:09 crc kubenswrapper[4895]: I0129 09:05:09.606235 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.606178291 podStartE2EDuration="3.606178291s" podCreationTimestamp="2026-01-29 09:05:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 
09:05:09.594060028 +0000 UTC m=+1451.235568184" watchObservedRunningTime="2026-01-29 09:05:09.606178291 +0000 UTC m=+1451.247686437" Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.157850 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.276005 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hstqz\" (UniqueName: \"kubernetes.io/projected/abb41055-685a-4b58-85e5-fa877703ac61-kube-api-access-hstqz\") pod \"abb41055-685a-4b58-85e5-fa877703ac61\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.276405 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-dns-svc\") pod \"abb41055-685a-4b58-85e5-fa877703ac61\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.276454 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-config\") pod \"abb41055-685a-4b58-85e5-fa877703ac61\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.276500 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-ovsdbserver-nb\") pod \"abb41055-685a-4b58-85e5-fa877703ac61\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.276560 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-dns-swift-storage-0\") pod 
\"abb41055-685a-4b58-85e5-fa877703ac61\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.276633 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-ovsdbserver-sb\") pod \"abb41055-685a-4b58-85e5-fa877703ac61\" (UID: \"abb41055-685a-4b58-85e5-fa877703ac61\") " Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.311130 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abb41055-685a-4b58-85e5-fa877703ac61-kube-api-access-hstqz" (OuterVolumeSpecName: "kube-api-access-hstqz") pod "abb41055-685a-4b58-85e5-fa877703ac61" (UID: "abb41055-685a-4b58-85e5-fa877703ac61"). InnerVolumeSpecName "kube-api-access-hstqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.345880 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-config" (OuterVolumeSpecName: "config") pod "abb41055-685a-4b58-85e5-fa877703ac61" (UID: "abb41055-685a-4b58-85e5-fa877703ac61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.352478 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "abb41055-685a-4b58-85e5-fa877703ac61" (UID: "abb41055-685a-4b58-85e5-fa877703ac61"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.370237 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "abb41055-685a-4b58-85e5-fa877703ac61" (UID: "abb41055-685a-4b58-85e5-fa877703ac61"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.372561 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "abb41055-685a-4b58-85e5-fa877703ac61" (UID: "abb41055-685a-4b58-85e5-fa877703ac61"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.382662 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.382712 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hstqz\" (UniqueName: \"kubernetes.io/projected/abb41055-685a-4b58-85e5-fa877703ac61-kube-api-access-hstqz\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.382729 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.382742 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:10 crc 
kubenswrapper[4895]: I0129 09:05:10.382754 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.398760 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "abb41055-685a-4b58-85e5-fa877703ac61" (UID: "abb41055-685a-4b58-85e5-fa877703ac61"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.485659 4895 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abb41055-685a-4b58-85e5-fa877703ac61-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.583900 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.588099 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-kfhbt" event={"ID":"abb41055-685a-4b58-85e5-fa877703ac61","Type":"ContainerDied","Data":"7ed73a603896f3db1809a3d69f2f59e4d973d53d54f8fd47fda23d206eeef5c1"} Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.588223 4895 scope.go:117] "RemoveContainer" containerID="0253525c28771cdac812e23fecad9c126e34a839e6e88c655182982ca71f4d69" Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.625988 4895 scope.go:117] "RemoveContainer" containerID="b6a409fb3d9294727707dc73535dbf537020a477050622bf1695f01f36fef7cd" Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.635982 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-kfhbt"] Jan 29 09:05:10 crc kubenswrapper[4895]: I0129 09:05:10.649996 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-kfhbt"] Jan 29 09:05:11 crc kubenswrapper[4895]: I0129 09:05:11.233158 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abb41055-685a-4b58-85e5-fa877703ac61" path="/var/lib/kubelet/pods/abb41055-685a-4b58-85e5-fa877703ac61/volumes" Jan 29 09:05:11 crc kubenswrapper[4895]: I0129 09:05:11.605195 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6dced459-73d7-4079-8450-1d22972197c0","Type":"ContainerStarted","Data":"8a8461a7ba960a2daedcf4be9aa862fa09be440e3622c013ffe1c076eb2c62db"} Jan 29 09:05:11 crc kubenswrapper[4895]: I0129 09:05:11.605984 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 09:05:11 crc kubenswrapper[4895]: I0129 09:05:11.648030 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.456799887 
podStartE2EDuration="7.648000859s" podCreationTimestamp="2026-01-29 09:05:04 +0000 UTC" firstStartedPulling="2026-01-29 09:05:05.72646021 +0000 UTC m=+1447.367968356" lastFinishedPulling="2026-01-29 09:05:10.917661182 +0000 UTC m=+1452.559169328" observedRunningTime="2026-01-29 09:05:11.63343081 +0000 UTC m=+1453.274938956" watchObservedRunningTime="2026-01-29 09:05:11.648000859 +0000 UTC m=+1453.289509005" Jan 29 09:05:13 crc kubenswrapper[4895]: I0129 09:05:13.642906 4895 generic.go:334] "Generic (PLEG): container finished" podID="e7867aa6-5213-42fd-b3fd-592a74e6959e" containerID="e248ecf7514232be0d7ff76c58cbad6dc9f1c2bf4367a5acb67c7c40f4d465d4" exitCode=0 Jan 29 09:05:13 crc kubenswrapper[4895]: I0129 09:05:13.643134 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wqmlx" event={"ID":"e7867aa6-5213-42fd-b3fd-592a74e6959e","Type":"ContainerDied","Data":"e248ecf7514232be0d7ff76c58cbad6dc9f1c2bf4367a5acb67c7c40f4d465d4"} Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.076630 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wqmlx" Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.211865 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4s6w\" (UniqueName: \"kubernetes.io/projected/e7867aa6-5213-42fd-b3fd-592a74e6959e-kube-api-access-l4s6w\") pod \"e7867aa6-5213-42fd-b3fd-592a74e6959e\" (UID: \"e7867aa6-5213-42fd-b3fd-592a74e6959e\") " Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.212357 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-scripts\") pod \"e7867aa6-5213-42fd-b3fd-592a74e6959e\" (UID: \"e7867aa6-5213-42fd-b3fd-592a74e6959e\") " Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.212404 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-config-data\") pod \"e7867aa6-5213-42fd-b3fd-592a74e6959e\" (UID: \"e7867aa6-5213-42fd-b3fd-592a74e6959e\") " Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.212518 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-combined-ca-bundle\") pod \"e7867aa6-5213-42fd-b3fd-592a74e6959e\" (UID: \"e7867aa6-5213-42fd-b3fd-592a74e6959e\") " Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.238532 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-scripts" (OuterVolumeSpecName: "scripts") pod "e7867aa6-5213-42fd-b3fd-592a74e6959e" (UID: "e7867aa6-5213-42fd-b3fd-592a74e6959e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.245619 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-config-data" (OuterVolumeSpecName: "config-data") pod "e7867aa6-5213-42fd-b3fd-592a74e6959e" (UID: "e7867aa6-5213-42fd-b3fd-592a74e6959e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.258644 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7867aa6-5213-42fd-b3fd-592a74e6959e" (UID: "e7867aa6-5213-42fd-b3fd-592a74e6959e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.263369 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7867aa6-5213-42fd-b3fd-592a74e6959e-kube-api-access-l4s6w" (OuterVolumeSpecName: "kube-api-access-l4s6w") pod "e7867aa6-5213-42fd-b3fd-592a74e6959e" (UID: "e7867aa6-5213-42fd-b3fd-592a74e6959e"). InnerVolumeSpecName "kube-api-access-l4s6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.320893 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.320967 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.320983 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7867aa6-5213-42fd-b3fd-592a74e6959e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.321000 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4s6w\" (UniqueName: \"kubernetes.io/projected/e7867aa6-5213-42fd-b3fd-592a74e6959e-kube-api-access-l4s6w\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.664291 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wqmlx" event={"ID":"e7867aa6-5213-42fd-b3fd-592a74e6959e","Type":"ContainerDied","Data":"6700033bf8ad9f8d166016e89a5d0bbab76a13d48809ec73b9e9f24fe648df1c"} Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.664351 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6700033bf8ad9f8d166016e89a5d0bbab76a13d48809ec73b9e9f24fe648df1c" Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.664439 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wqmlx" Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.882987 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.883382 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" containerName="nova-api-log" containerID="cri-o://854e6f35fa79d8d126a6508e386d76e3c37bf65d757d170aeeb5ea3fe84b7277" gracePeriod=30 Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.883493 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" containerName="nova-api-api" containerID="cri-o://7eb316dc1adff31e10c5f274eb7af19b970b86ba0b8d8f156f54e18d7bca8fc4" gracePeriod=30 Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.913749 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.914173 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="74ef3ca3-2112-4d1c-b10f-4b758945253f" containerName="nova-scheduler-scheduler" containerID="cri-o://39430963f6521a41c42fbbb54c4a8d7756f3d578096a4b2182026b430ff2e09e" gracePeriod=30 Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.942300 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.942721 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1a417ac6-d7a5-46e9-a456-f0a50beaa91d" containerName="nova-metadata-log" containerID="cri-o://c8585f99646178a5bc12eb8475cad3bf5b6d88577af11d1e2b825d526855538b" gracePeriod=30 Jan 29 09:05:15 crc kubenswrapper[4895]: I0129 09:05:15.942817 4895 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1a417ac6-d7a5-46e9-a456-f0a50beaa91d" containerName="nova-metadata-metadata" containerID="cri-o://c80c280f35505a58b77a6c3633478493128b81dd798cb4aae89814d83681521a" gracePeriod=30 Jan 29 09:05:16 crc kubenswrapper[4895]: I0129 09:05:16.682357 4895 generic.go:334] "Generic (PLEG): container finished" podID="f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" containerID="7eb316dc1adff31e10c5f274eb7af19b970b86ba0b8d8f156f54e18d7bca8fc4" exitCode=0 Jan 29 09:05:16 crc kubenswrapper[4895]: I0129 09:05:16.683538 4895 generic.go:334] "Generic (PLEG): container finished" podID="f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" containerID="854e6f35fa79d8d126a6508e386d76e3c37bf65d757d170aeeb5ea3fe84b7277" exitCode=143 Jan 29 09:05:16 crc kubenswrapper[4895]: I0129 09:05:16.682595 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4","Type":"ContainerDied","Data":"7eb316dc1adff31e10c5f274eb7af19b970b86ba0b8d8f156f54e18d7bca8fc4"} Jan 29 09:05:16 crc kubenswrapper[4895]: I0129 09:05:16.684041 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4","Type":"ContainerDied","Data":"854e6f35fa79d8d126a6508e386d76e3c37bf65d757d170aeeb5ea3fe84b7277"} Jan 29 09:05:16 crc kubenswrapper[4895]: I0129 09:05:16.688058 4895 generic.go:334] "Generic (PLEG): container finished" podID="1a417ac6-d7a5-46e9-a456-f0a50beaa91d" containerID="c8585f99646178a5bc12eb8475cad3bf5b6d88577af11d1e2b825d526855538b" exitCode=143 Jan 29 09:05:16 crc kubenswrapper[4895]: I0129 09:05:16.688168 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1a417ac6-d7a5-46e9-a456-f0a50beaa91d","Type":"ContainerDied","Data":"c8585f99646178a5bc12eb8475cad3bf5b6d88577af11d1e2b825d526855538b"} Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 
09:05:17.255042 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.374073 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-logs\") pod \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.374475 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-combined-ca-bundle\") pod \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.374510 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-config-data\") pod \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.374562 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-public-tls-certs\") pod \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.374595 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9lkl\" (UniqueName: \"kubernetes.io/projected/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-kube-api-access-w9lkl\") pod \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.374631 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-internal-tls-certs\") pod \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\" (UID: \"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4\") " Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.382126 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-logs" (OuterVolumeSpecName: "logs") pod "f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" (UID: "f267ef02-82eb-4bee-a6e9-a8fecf6e89d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.389231 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-kube-api-access-w9lkl" (OuterVolumeSpecName: "kube-api-access-w9lkl") pod "f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" (UID: "f267ef02-82eb-4bee-a6e9-a8fecf6e89d4"). InnerVolumeSpecName "kube-api-access-w9lkl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:05:17 crc kubenswrapper[4895]: E0129 09:05:17.412488 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="39430963f6521a41c42fbbb54c4a8d7756f3d578096a4b2182026b430ff2e09e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 09:05:17 crc kubenswrapper[4895]: E0129 09:05:17.427347 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="39430963f6521a41c42fbbb54c4a8d7756f3d578096a4b2182026b430ff2e09e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 09:05:17 crc kubenswrapper[4895]: E0129 09:05:17.429777 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="39430963f6521a41c42fbbb54c4a8d7756f3d578096a4b2182026b430ff2e09e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 09:05:17 crc kubenswrapper[4895]: E0129 09:05:17.430384 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="74ef3ca3-2112-4d1c-b10f-4b758945253f" containerName="nova-scheduler-scheduler" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.447059 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" (UID: "f267ef02-82eb-4bee-a6e9-a8fecf6e89d4"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.451403 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" (UID: "f267ef02-82eb-4bee-a6e9-a8fecf6e89d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.466190 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-config-data" (OuterVolumeSpecName: "config-data") pod "f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" (UID: "f267ef02-82eb-4bee-a6e9-a8fecf6e89d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.477619 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.477666 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.477675 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.477689 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9lkl\" (UniqueName: \"kubernetes.io/projected/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-kube-api-access-w9lkl\") 
on node \"crc\" DevicePath \"\"" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.477699 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.487462 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" (UID: "f267ef02-82eb-4bee-a6e9-a8fecf6e89d4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.580373 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.704174 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f267ef02-82eb-4bee-a6e9-a8fecf6e89d4","Type":"ContainerDied","Data":"e27faa0d6f52132379d45d5b0f1839d5afe4d360cbab1d7513798df02cb234b1"} Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.704249 4895 scope.go:117] "RemoveContainer" containerID="7eb316dc1adff31e10c5f274eb7af19b970b86ba0b8d8f156f54e18d7bca8fc4" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.704485 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.742784 4895 scope.go:117] "RemoveContainer" containerID="854e6f35fa79d8d126a6508e386d76e3c37bf65d757d170aeeb5ea3fe84b7277" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.768455 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.791984 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.814215 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 09:05:17 crc kubenswrapper[4895]: E0129 09:05:17.815009 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" containerName="nova-api-api" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.815039 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" containerName="nova-api-api" Jan 29 09:05:17 crc kubenswrapper[4895]: E0129 09:05:17.815071 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7867aa6-5213-42fd-b3fd-592a74e6959e" containerName="nova-manage" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.815080 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7867aa6-5213-42fd-b3fd-592a74e6959e" containerName="nova-manage" Jan 29 09:05:17 crc kubenswrapper[4895]: E0129 09:05:17.815099 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb41055-685a-4b58-85e5-fa877703ac61" containerName="dnsmasq-dns" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.815109 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb41055-685a-4b58-85e5-fa877703ac61" containerName="dnsmasq-dns" Jan 29 09:05:17 crc kubenswrapper[4895]: E0129 09:05:17.815136 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" containerName="nova-api-log" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.815145 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" containerName="nova-api-log" Jan 29 09:05:17 crc kubenswrapper[4895]: E0129 09:05:17.815158 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb41055-685a-4b58-85e5-fa877703ac61" containerName="init" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.815169 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb41055-685a-4b58-85e5-fa877703ac61" containerName="init" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.815432 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="abb41055-685a-4b58-85e5-fa877703ac61" containerName="dnsmasq-dns" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.815460 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" containerName="nova-api-api" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.815472 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" containerName="nova-api-log" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.815492 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7867aa6-5213-42fd-b3fd-592a74e6959e" containerName="nova-manage" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.817149 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.820422 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.820640 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.820516 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.828742 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.888952 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558cbc7f-9455-49b5-89aa-b898d468ca08-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.889019 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/558cbc7f-9455-49b5-89aa-b898d468ca08-internal-tls-certs\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.889070 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6tmb\" (UniqueName: \"kubernetes.io/projected/558cbc7f-9455-49b5-89aa-b898d468ca08-kube-api-access-z6tmb\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.889313 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/558cbc7f-9455-49b5-89aa-b898d468ca08-public-tls-certs\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.889366 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/558cbc7f-9455-49b5-89aa-b898d468ca08-logs\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.889438 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558cbc7f-9455-49b5-89aa-b898d468ca08-config-data\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.992679 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/558cbc7f-9455-49b5-89aa-b898d468ca08-public-tls-certs\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.992736 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/558cbc7f-9455-49b5-89aa-b898d468ca08-logs\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.992829 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558cbc7f-9455-49b5-89aa-b898d468ca08-config-data\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: 
I0129 09:05:17.992888 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558cbc7f-9455-49b5-89aa-b898d468ca08-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.992907 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/558cbc7f-9455-49b5-89aa-b898d468ca08-internal-tls-certs\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.992971 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6tmb\" (UniqueName: \"kubernetes.io/projected/558cbc7f-9455-49b5-89aa-b898d468ca08-kube-api-access-z6tmb\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.993496 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/558cbc7f-9455-49b5-89aa-b898d468ca08-logs\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.997396 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/558cbc7f-9455-49b5-89aa-b898d468ca08-internal-tls-certs\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:17 crc kubenswrapper[4895]: I0129 09:05:17.998843 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558cbc7f-9455-49b5-89aa-b898d468ca08-config-data\") pod \"nova-api-0\" (UID: 
\"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:18 crc kubenswrapper[4895]: I0129 09:05:18.000615 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558cbc7f-9455-49b5-89aa-b898d468ca08-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:18 crc kubenswrapper[4895]: I0129 09:05:18.000738 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/558cbc7f-9455-49b5-89aa-b898d468ca08-public-tls-certs\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:18 crc kubenswrapper[4895]: I0129 09:05:18.020889 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6tmb\" (UniqueName: \"kubernetes.io/projected/558cbc7f-9455-49b5-89aa-b898d468ca08-kube-api-access-z6tmb\") pod \"nova-api-0\" (UID: \"558cbc7f-9455-49b5-89aa-b898d468ca08\") " pod="openstack/nova-api-0" Jan 29 09:05:18 crc kubenswrapper[4895]: I0129 09:05:18.141819 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:05:18 crc kubenswrapper[4895]: I0129 09:05:18.685564 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:05:18 crc kubenswrapper[4895]: W0129 09:05:18.694869 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod558cbc7f_9455_49b5_89aa_b898d468ca08.slice/crio-b1950749d56c3bd4ee952213500216896bd2ba1965309a84e8d77a62c254e623 WatchSource:0}: Error finding container b1950749d56c3bd4ee952213500216896bd2ba1965309a84e8d77a62c254e623: Status 404 returned error can't find the container with id b1950749d56c3bd4ee952213500216896bd2ba1965309a84e8d77a62c254e623 Jan 29 09:05:18 crc kubenswrapper[4895]: I0129 09:05:18.719645 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"558cbc7f-9455-49b5-89aa-b898d468ca08","Type":"ContainerStarted","Data":"b1950749d56c3bd4ee952213500216896bd2ba1965309a84e8d77a62c254e623"} Jan 29 09:05:18 crc kubenswrapper[4895]: I0129 09:05:18.726133 4895 generic.go:334] "Generic (PLEG): container finished" podID="74ef3ca3-2112-4d1c-b10f-4b758945253f" containerID="39430963f6521a41c42fbbb54c4a8d7756f3d578096a4b2182026b430ff2e09e" exitCode=0 Jan 29 09:05:18 crc kubenswrapper[4895]: I0129 09:05:18.726514 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"74ef3ca3-2112-4d1c-b10f-4b758945253f","Type":"ContainerDied","Data":"39430963f6521a41c42fbbb54c4a8d7756f3d578096a4b2182026b430ff2e09e"} Jan 29 09:05:18 crc kubenswrapper[4895]: I0129 09:05:18.726689 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"74ef3ca3-2112-4d1c-b10f-4b758945253f","Type":"ContainerDied","Data":"99d4be1539d7bed87df12209c78d6d9da980fcf8453c67708e1b83ceefdfa73c"} Jan 29 09:05:18 crc kubenswrapper[4895]: I0129 09:05:18.726845 4895 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="99d4be1539d7bed87df12209c78d6d9da980fcf8453c67708e1b83ceefdfa73c" Jan 29 09:05:18 crc kubenswrapper[4895]: I0129 09:05:18.873643 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.016063 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dswjg\" (UniqueName: \"kubernetes.io/projected/74ef3ca3-2112-4d1c-b10f-4b758945253f-kube-api-access-dswjg\") pod \"74ef3ca3-2112-4d1c-b10f-4b758945253f\" (UID: \"74ef3ca3-2112-4d1c-b10f-4b758945253f\") " Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.016473 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ef3ca3-2112-4d1c-b10f-4b758945253f-config-data\") pod \"74ef3ca3-2112-4d1c-b10f-4b758945253f\" (UID: \"74ef3ca3-2112-4d1c-b10f-4b758945253f\") " Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.016548 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ef3ca3-2112-4d1c-b10f-4b758945253f-combined-ca-bundle\") pod \"74ef3ca3-2112-4d1c-b10f-4b758945253f\" (UID: \"74ef3ca3-2112-4d1c-b10f-4b758945253f\") " Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.021198 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74ef3ca3-2112-4d1c-b10f-4b758945253f-kube-api-access-dswjg" (OuterVolumeSpecName: "kube-api-access-dswjg") pod "74ef3ca3-2112-4d1c-b10f-4b758945253f" (UID: "74ef3ca3-2112-4d1c-b10f-4b758945253f"). InnerVolumeSpecName "kube-api-access-dswjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.045752 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74ef3ca3-2112-4d1c-b10f-4b758945253f-config-data" (OuterVolumeSpecName: "config-data") pod "74ef3ca3-2112-4d1c-b10f-4b758945253f" (UID: "74ef3ca3-2112-4d1c-b10f-4b758945253f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.057966 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74ef3ca3-2112-4d1c-b10f-4b758945253f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74ef3ca3-2112-4d1c-b10f-4b758945253f" (UID: "74ef3ca3-2112-4d1c-b10f-4b758945253f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.119773 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ef3ca3-2112-4d1c-b10f-4b758945253f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.119836 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ef3ca3-2112-4d1c-b10f-4b758945253f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.119855 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dswjg\" (UniqueName: \"kubernetes.io/projected/74ef3ca3-2112-4d1c-b10f-4b758945253f-kube-api-access-dswjg\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.245006 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1a417ac6-d7a5-46e9-a456-f0a50beaa91d" containerName="nova-metadata-metadata" 
probeResult="failure" output="Get \"https://10.217.0.201:8775/\": dial tcp 10.217.0.201:8775: connect: connection refused" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.252717 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1a417ac6-d7a5-46e9-a456-f0a50beaa91d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": dial tcp 10.217.0.201:8775: connect: connection refused" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.265193 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f267ef02-82eb-4bee-a6e9-a8fecf6e89d4" path="/var/lib/kubelet/pods/f267ef02-82eb-4bee-a6e9-a8fecf6e89d4/volumes" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.599623 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.654400 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7wvh\" (UniqueName: \"kubernetes.io/projected/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-kube-api-access-h7wvh\") pod \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.654529 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-config-data\") pod \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.654596 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-nova-metadata-tls-certs\") pod \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " Jan 29 
09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.654627 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-logs\") pod \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.654696 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-combined-ca-bundle\") pod \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\" (UID: \"1a417ac6-d7a5-46e9-a456-f0a50beaa91d\") " Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.657335 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-logs" (OuterVolumeSpecName: "logs") pod "1a417ac6-d7a5-46e9-a456-f0a50beaa91d" (UID: "1a417ac6-d7a5-46e9-a456-f0a50beaa91d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.684309 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-kube-api-access-h7wvh" (OuterVolumeSpecName: "kube-api-access-h7wvh") pod "1a417ac6-d7a5-46e9-a456-f0a50beaa91d" (UID: "1a417ac6-d7a5-46e9-a456-f0a50beaa91d"). InnerVolumeSpecName "kube-api-access-h7wvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.694385 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a417ac6-d7a5-46e9-a456-f0a50beaa91d" (UID: "1a417ac6-d7a5-46e9-a456-f0a50beaa91d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.727163 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-config-data" (OuterVolumeSpecName: "config-data") pod "1a417ac6-d7a5-46e9-a456-f0a50beaa91d" (UID: "1a417ac6-d7a5-46e9-a456-f0a50beaa91d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.748603 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1a417ac6-d7a5-46e9-a456-f0a50beaa91d" (UID: "1a417ac6-d7a5-46e9-a456-f0a50beaa91d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.762641 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7wvh\" (UniqueName: \"kubernetes.io/projected/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-kube-api-access-h7wvh\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.762679 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.762691 4895 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.762701 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-logs\") on node \"crc\" 
DevicePath \"\"" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.762711 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a417ac6-d7a5-46e9-a456-f0a50beaa91d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.764424 4895 generic.go:334] "Generic (PLEG): container finished" podID="1a417ac6-d7a5-46e9-a456-f0a50beaa91d" containerID="c80c280f35505a58b77a6c3633478493128b81dd798cb4aae89814d83681521a" exitCode=0 Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.764533 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1a417ac6-d7a5-46e9-a456-f0a50beaa91d","Type":"ContainerDied","Data":"c80c280f35505a58b77a6c3633478493128b81dd798cb4aae89814d83681521a"} Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.764568 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1a417ac6-d7a5-46e9-a456-f0a50beaa91d","Type":"ContainerDied","Data":"bbe954eac31c0db31f78602e8a9703bad2df636dd14c32908bc60d2c40ff5614"} Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.764560 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.764611 4895 scope.go:117] "RemoveContainer" containerID="c80c280f35505a58b77a6c3633478493128b81dd798cb4aae89814d83681521a" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.783518 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.784281 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"558cbc7f-9455-49b5-89aa-b898d468ca08","Type":"ContainerStarted","Data":"f9e0c374bb397d503aef95266d81d543706cdb43480f7451553ec5b83c560c7d"} Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.795721 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"558cbc7f-9455-49b5-89aa-b898d468ca08","Type":"ContainerStarted","Data":"c633712ce43a7ece9ce21fecd0235cd5affbb70709adfc215c0656b2e902fac3"} Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.823756 4895 scope.go:117] "RemoveContainer" containerID="c8585f99646178a5bc12eb8475cad3bf5b6d88577af11d1e2b825d526855538b" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.836988 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.872291 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.883028 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:05:19 crc kubenswrapper[4895]: E0129 09:05:19.883846 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a417ac6-d7a5-46e9-a456-f0a50beaa91d" containerName="nova-metadata-log" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.883876 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a417ac6-d7a5-46e9-a456-f0a50beaa91d" containerName="nova-metadata-log" Jan 29 09:05:19 crc kubenswrapper[4895]: E0129 09:05:19.883948 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74ef3ca3-2112-4d1c-b10f-4b758945253f" containerName="nova-scheduler-scheduler" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.883965 4895 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="74ef3ca3-2112-4d1c-b10f-4b758945253f" containerName="nova-scheduler-scheduler" Jan 29 09:05:19 crc kubenswrapper[4895]: E0129 09:05:19.883980 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a417ac6-d7a5-46e9-a456-f0a50beaa91d" containerName="nova-metadata-metadata" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.883988 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a417ac6-d7a5-46e9-a456-f0a50beaa91d" containerName="nova-metadata-metadata" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.884261 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="74ef3ca3-2112-4d1c-b10f-4b758945253f" containerName="nova-scheduler-scheduler" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.884284 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a417ac6-d7a5-46e9-a456-f0a50beaa91d" containerName="nova-metadata-metadata" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.884298 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a417ac6-d7a5-46e9-a456-f0a50beaa91d" containerName="nova-metadata-log" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.885260 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.885236426 podStartE2EDuration="2.885236426s" podCreationTimestamp="2026-01-29 09:05:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:05:19.842969278 +0000 UTC m=+1461.484477434" watchObservedRunningTime="2026-01-29 09:05:19.885236426 +0000 UTC m=+1461.526744572" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.885837 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.894099 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.894383 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.926093 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.949106 4895 scope.go:117] "RemoveContainer" containerID="c80c280f35505a58b77a6c3633478493128b81dd798cb4aae89814d83681521a" Jan 29 09:05:19 crc kubenswrapper[4895]: E0129 09:05:19.949812 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c80c280f35505a58b77a6c3633478493128b81dd798cb4aae89814d83681521a\": container with ID starting with c80c280f35505a58b77a6c3633478493128b81dd798cb4aae89814d83681521a not found: ID does not exist" containerID="c80c280f35505a58b77a6c3633478493128b81dd798cb4aae89814d83681521a" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.949877 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c80c280f35505a58b77a6c3633478493128b81dd798cb4aae89814d83681521a"} err="failed to get container status \"c80c280f35505a58b77a6c3633478493128b81dd798cb4aae89814d83681521a\": rpc error: code = NotFound desc = could not find container \"c80c280f35505a58b77a6c3633478493128b81dd798cb4aae89814d83681521a\": container with ID starting with c80c280f35505a58b77a6c3633478493128b81dd798cb4aae89814d83681521a not found: ID does not exist" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.949937 4895 scope.go:117] "RemoveContainer" containerID="c8585f99646178a5bc12eb8475cad3bf5b6d88577af11d1e2b825d526855538b" Jan 29 09:05:19 crc 
kubenswrapper[4895]: E0129 09:05:19.951733 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8585f99646178a5bc12eb8475cad3bf5b6d88577af11d1e2b825d526855538b\": container with ID starting with c8585f99646178a5bc12eb8475cad3bf5b6d88577af11d1e2b825d526855538b not found: ID does not exist" containerID="c8585f99646178a5bc12eb8475cad3bf5b6d88577af11d1e2b825d526855538b" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.951766 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8585f99646178a5bc12eb8475cad3bf5b6d88577af11d1e2b825d526855538b"} err="failed to get container status \"c8585f99646178a5bc12eb8475cad3bf5b6d88577af11d1e2b825d526855538b\": rpc error: code = NotFound desc = could not find container \"c8585f99646178a5bc12eb8475cad3bf5b6d88577af11d1e2b825d526855538b\": container with ID starting with c8585f99646178a5bc12eb8475cad3bf5b6d88577af11d1e2b825d526855538b not found: ID does not exist" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.962200 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.977133 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.989783 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.991968 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:05:19 crc kubenswrapper[4895]: I0129 09:05:19.995249 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.023298 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.074904 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2e13290-5cda-49ac-9efd-5e8a72da76b6-logs\") pod \"nova-metadata-0\" (UID: \"a2e13290-5cda-49ac-9efd-5e8a72da76b6\") " pod="openstack/nova-metadata-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.075050 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e13290-5cda-49ac-9efd-5e8a72da76b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a2e13290-5cda-49ac-9efd-5e8a72da76b6\") " pod="openstack/nova-metadata-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.075083 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2rp6\" (UniqueName: \"kubernetes.io/projected/a2e13290-5cda-49ac-9efd-5e8a72da76b6-kube-api-access-k2rp6\") pod \"nova-metadata-0\" (UID: \"a2e13290-5cda-49ac-9efd-5e8a72da76b6\") " pod="openstack/nova-metadata-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.075393 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e13290-5cda-49ac-9efd-5e8a72da76b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a2e13290-5cda-49ac-9efd-5e8a72da76b6\") " pod="openstack/nova-metadata-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.075661 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e13290-5cda-49ac-9efd-5e8a72da76b6-config-data\") pod \"nova-metadata-0\" (UID: \"a2e13290-5cda-49ac-9efd-5e8a72da76b6\") " pod="openstack/nova-metadata-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.180119 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216aa652-e284-4fb8-90bf-d975cc19d1f0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"216aa652-e284-4fb8-90bf-d975cc19d1f0\") " pod="openstack/nova-scheduler-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.180194 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2e13290-5cda-49ac-9efd-5e8a72da76b6-logs\") pod \"nova-metadata-0\" (UID: \"a2e13290-5cda-49ac-9efd-5e8a72da76b6\") " pod="openstack/nova-metadata-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.180508 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e13290-5cda-49ac-9efd-5e8a72da76b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a2e13290-5cda-49ac-9efd-5e8a72da76b6\") " pod="openstack/nova-metadata-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.180591 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2rp6\" (UniqueName: \"kubernetes.io/projected/a2e13290-5cda-49ac-9efd-5e8a72da76b6-kube-api-access-k2rp6\") pod \"nova-metadata-0\" (UID: \"a2e13290-5cda-49ac-9efd-5e8a72da76b6\") " pod="openstack/nova-metadata-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.180828 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a2e13290-5cda-49ac-9efd-5e8a72da76b6-logs\") pod \"nova-metadata-0\" (UID: \"a2e13290-5cda-49ac-9efd-5e8a72da76b6\") " pod="openstack/nova-metadata-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.180830 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e13290-5cda-49ac-9efd-5e8a72da76b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a2e13290-5cda-49ac-9efd-5e8a72da76b6\") " pod="openstack/nova-metadata-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.180985 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mtbq\" (UniqueName: \"kubernetes.io/projected/216aa652-e284-4fb8-90bf-d975cc19d1f0-kube-api-access-7mtbq\") pod \"nova-scheduler-0\" (UID: \"216aa652-e284-4fb8-90bf-d975cc19d1f0\") " pod="openstack/nova-scheduler-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.181027 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216aa652-e284-4fb8-90bf-d975cc19d1f0-config-data\") pod \"nova-scheduler-0\" (UID: \"216aa652-e284-4fb8-90bf-d975cc19d1f0\") " pod="openstack/nova-scheduler-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.181074 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e13290-5cda-49ac-9efd-5e8a72da76b6-config-data\") pod \"nova-metadata-0\" (UID: \"a2e13290-5cda-49ac-9efd-5e8a72da76b6\") " pod="openstack/nova-metadata-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.187798 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e13290-5cda-49ac-9efd-5e8a72da76b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"a2e13290-5cda-49ac-9efd-5e8a72da76b6\") " pod="openstack/nova-metadata-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.189580 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e13290-5cda-49ac-9efd-5e8a72da76b6-config-data\") pod \"nova-metadata-0\" (UID: \"a2e13290-5cda-49ac-9efd-5e8a72da76b6\") " pod="openstack/nova-metadata-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.192281 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e13290-5cda-49ac-9efd-5e8a72da76b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a2e13290-5cda-49ac-9efd-5e8a72da76b6\") " pod="openstack/nova-metadata-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.207703 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2rp6\" (UniqueName: \"kubernetes.io/projected/a2e13290-5cda-49ac-9efd-5e8a72da76b6-kube-api-access-k2rp6\") pod \"nova-metadata-0\" (UID: \"a2e13290-5cda-49ac-9efd-5e8a72da76b6\") " pod="openstack/nova-metadata-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.242104 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.284686 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216aa652-e284-4fb8-90bf-d975cc19d1f0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"216aa652-e284-4fb8-90bf-d975cc19d1f0\") " pod="openstack/nova-scheduler-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.285173 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mtbq\" (UniqueName: \"kubernetes.io/projected/216aa652-e284-4fb8-90bf-d975cc19d1f0-kube-api-access-7mtbq\") pod \"nova-scheduler-0\" (UID: \"216aa652-e284-4fb8-90bf-d975cc19d1f0\") " pod="openstack/nova-scheduler-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.285227 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216aa652-e284-4fb8-90bf-d975cc19d1f0-config-data\") pod \"nova-scheduler-0\" (UID: \"216aa652-e284-4fb8-90bf-d975cc19d1f0\") " pod="openstack/nova-scheduler-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.290313 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216aa652-e284-4fb8-90bf-d975cc19d1f0-config-data\") pod \"nova-scheduler-0\" (UID: \"216aa652-e284-4fb8-90bf-d975cc19d1f0\") " pod="openstack/nova-scheduler-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.292096 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216aa652-e284-4fb8-90bf-d975cc19d1f0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"216aa652-e284-4fb8-90bf-d975cc19d1f0\") " pod="openstack/nova-scheduler-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.308832 4895 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-7mtbq\" (UniqueName: \"kubernetes.io/projected/216aa652-e284-4fb8-90bf-d975cc19d1f0-kube-api-access-7mtbq\") pod \"nova-scheduler-0\" (UID: \"216aa652-e284-4fb8-90bf-d975cc19d1f0\") " pod="openstack/nova-scheduler-0" Jan 29 09:05:20 crc kubenswrapper[4895]: I0129 09:05:20.318096 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:05:21 crc kubenswrapper[4895]: W0129 09:05:20.831311 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2e13290_5cda_49ac_9efd_5e8a72da76b6.slice/crio-a0da3ddbcd78ed43a6c49a94bdd76fe17149dcac4641be17bbd5eda9fab00888 WatchSource:0}: Error finding container a0da3ddbcd78ed43a6c49a94bdd76fe17149dcac4641be17bbd5eda9fab00888: Status 404 returned error can't find the container with id a0da3ddbcd78ed43a6c49a94bdd76fe17149dcac4641be17bbd5eda9fab00888 Jan 29 09:05:21 crc kubenswrapper[4895]: I0129 09:05:20.837026 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:05:21 crc kubenswrapper[4895]: I0129 09:05:20.910030 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:05:21 crc kubenswrapper[4895]: W0129 09:05:20.927012 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod216aa652_e284_4fb8_90bf_d975cc19d1f0.slice/crio-79e568fde8bcd88a15937d3e094ea831a6b09a9486244408b4db0d3fa6b6501c WatchSource:0}: Error finding container 79e568fde8bcd88a15937d3e094ea831a6b09a9486244408b4db0d3fa6b6501c: Status 404 returned error can't find the container with id 79e568fde8bcd88a15937d3e094ea831a6b09a9486244408b4db0d3fa6b6501c Jan 29 09:05:21 crc kubenswrapper[4895]: I0129 09:05:21.358651 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a417ac6-d7a5-46e9-a456-f0a50beaa91d" 
path="/var/lib/kubelet/pods/1a417ac6-d7a5-46e9-a456-f0a50beaa91d/volumes" Jan 29 09:05:21 crc kubenswrapper[4895]: I0129 09:05:21.372265 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74ef3ca3-2112-4d1c-b10f-4b758945253f" path="/var/lib/kubelet/pods/74ef3ca3-2112-4d1c-b10f-4b758945253f/volumes" Jan 29 09:05:21 crc kubenswrapper[4895]: I0129 09:05:21.819813 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"216aa652-e284-4fb8-90bf-d975cc19d1f0","Type":"ContainerStarted","Data":"d7ef80d565b00c527c8fa8a74a578ba2ae012668454a5737354f72986f90e548"} Jan 29 09:05:21 crc kubenswrapper[4895]: I0129 09:05:21.819879 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"216aa652-e284-4fb8-90bf-d975cc19d1f0","Type":"ContainerStarted","Data":"79e568fde8bcd88a15937d3e094ea831a6b09a9486244408b4db0d3fa6b6501c"} Jan 29 09:05:21 crc kubenswrapper[4895]: I0129 09:05:21.823551 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2e13290-5cda-49ac-9efd-5e8a72da76b6","Type":"ContainerStarted","Data":"2ea866f029a0733f8df0e91366c5d1882539b60732799fd4950424c664cb0dd1"} Jan 29 09:05:21 crc kubenswrapper[4895]: I0129 09:05:21.823597 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2e13290-5cda-49ac-9efd-5e8a72da76b6","Type":"ContainerStarted","Data":"ac3afa09270b4babca09fd7e1e2bcd2f20c46dc1b7787e5591ac4248319c1ec0"} Jan 29 09:05:21 crc kubenswrapper[4895]: I0129 09:05:21.823612 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2e13290-5cda-49ac-9efd-5e8a72da76b6","Type":"ContainerStarted","Data":"a0da3ddbcd78ed43a6c49a94bdd76fe17149dcac4641be17bbd5eda9fab00888"} Jan 29 09:05:21 crc kubenswrapper[4895]: I0129 09:05:21.850012 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-scheduler-0" podStartSLOduration=2.849976696 podStartE2EDuration="2.849976696s" podCreationTimestamp="2026-01-29 09:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:05:21.839550858 +0000 UTC m=+1463.481059004" watchObservedRunningTime="2026-01-29 09:05:21.849976696 +0000 UTC m=+1463.491484842" Jan 29 09:05:21 crc kubenswrapper[4895]: I0129 09:05:21.889071 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.889044769 podStartE2EDuration="2.889044769s" podCreationTimestamp="2026-01-29 09:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:05:21.863453836 +0000 UTC m=+1463.504961982" watchObservedRunningTime="2026-01-29 09:05:21.889044769 +0000 UTC m=+1463.530552905" Jan 29 09:05:25 crc kubenswrapper[4895]: I0129 09:05:25.242724 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:05:25 crc kubenswrapper[4895]: I0129 09:05:25.243167 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:05:25 crc kubenswrapper[4895]: I0129 09:05:25.319409 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 09:05:28 crc kubenswrapper[4895]: I0129 09:05:28.142285 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 09:05:28 crc kubenswrapper[4895]: I0129 09:05:28.146194 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 09:05:29 crc kubenswrapper[4895]: I0129 09:05:29.159325 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="558cbc7f-9455-49b5-89aa-b898d468ca08" 
containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 09:05:29 crc kubenswrapper[4895]: I0129 09:05:29.159407 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="558cbc7f-9455-49b5-89aa-b898d468ca08" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.212:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:05:30 crc kubenswrapper[4895]: I0129 09:05:30.242724 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 09:05:30 crc kubenswrapper[4895]: I0129 09:05:30.243202 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 09:05:30 crc kubenswrapper[4895]: I0129 09:05:30.319475 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 09:05:30 crc kubenswrapper[4895]: I0129 09:05:30.352548 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 09:05:30 crc kubenswrapper[4895]: I0129 09:05:30.955975 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 09:05:31 crc kubenswrapper[4895]: I0129 09:05:31.256535 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a2e13290-5cda-49ac-9efd-5e8a72da76b6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:05:31 crc kubenswrapper[4895]: I0129 09:05:31.256620 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a2e13290-5cda-49ac-9efd-5e8a72da76b6" containerName="nova-metadata-log" 
probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:05:35 crc kubenswrapper[4895]: I0129 09:05:35.121282 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 09:05:38 crc kubenswrapper[4895]: I0129 09:05:38.152609 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 09:05:38 crc kubenswrapper[4895]: I0129 09:05:38.153770 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 09:05:38 crc kubenswrapper[4895]: I0129 09:05:38.160686 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 09:05:38 crc kubenswrapper[4895]: I0129 09:05:38.168567 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 09:05:39 crc kubenswrapper[4895]: I0129 09:05:39.022409 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 09:05:39 crc kubenswrapper[4895]: I0129 09:05:39.030271 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 09:05:40 crc kubenswrapper[4895]: I0129 09:05:40.249272 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 09:05:40 crc kubenswrapper[4895]: I0129 09:05:40.254000 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 09:05:40 crc kubenswrapper[4895]: I0129 09:05:40.259170 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 09:05:41 crc kubenswrapper[4895]: I0129 09:05:41.057525 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 09:05:50 crc kubenswrapper[4895]: 
I0129 09:05:50.101094 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:05:51 crc kubenswrapper[4895]: I0129 09:05:51.399172 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:05:55 crc kubenswrapper[4895]: I0129 09:05:55.901541 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" containerName="rabbitmq" containerID="cri-o://78b578d09bb9a0244c465780f9b1e9a302262947c9504c01f4ba2604b679e677" gracePeriod=604795 Jan 29 09:05:56 crc kubenswrapper[4895]: I0129 09:05:56.095413 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="cbcad4af-7c93-4d6e-b825-42a586db5d81" containerName="rabbitmq" containerID="cri-o://a79ddc6cc9a8081dcca315fbaf6560ed3ec63f0c7c48656d13a4540ecbf048bd" gracePeriod=604796 Jan 29 09:05:58 crc kubenswrapper[4895]: I0129 09:05:58.451520 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="cbcad4af-7c93-4d6e-b825-42a586db5d81" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.383019 4895 generic.go:334] "Generic (PLEG): container finished" podID="cbcad4af-7c93-4d6e-b825-42a586db5d81" containerID="a79ddc6cc9a8081dcca315fbaf6560ed3ec63f0c7c48656d13a4540ecbf048bd" exitCode=0 Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.383096 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cbcad4af-7c93-4d6e-b825-42a586db5d81","Type":"ContainerDied","Data":"a79ddc6cc9a8081dcca315fbaf6560ed3ec63f0c7c48656d13a4540ecbf048bd"} Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.387185 4895 generic.go:334] "Generic (PLEG): container finished" 
podID="7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" containerID="78b578d09bb9a0244c465780f9b1e9a302262947c9504c01f4ba2604b679e677" exitCode=0 Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.387223 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5","Type":"ContainerDied","Data":"78b578d09bb9a0244c465780f9b1e9a302262947c9504c01f4ba2604b679e677"} Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.556768 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.650060 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-plugins\") pod \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.650122 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-confd\") pod \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.650218 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-erlang-cookie-secret\") pod \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.650244 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-plugins-conf\") pod \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\" (UID: 
\"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.650314 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-tls\") pod \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.650357 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.650412 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-pod-info\") pod \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.650443 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-erlang-cookie\") pod \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.650568 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-server-conf\") pod \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.650659 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntw6t\" (UniqueName: 
\"kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-kube-api-access-ntw6t\") pod \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.650731 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-config-data\") pod \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\" (UID: \"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.656267 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" (UID: "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.661057 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-pod-info" (OuterVolumeSpecName: "pod-info") pod "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" (UID: "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.661102 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" (UID: "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.663553 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" (UID: "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.668031 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-kube-api-access-ntw6t" (OuterVolumeSpecName: "kube-api-access-ntw6t") pod "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" (UID: "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5"). InnerVolumeSpecName "kube-api-access-ntw6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.668552 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" (UID: "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.684944 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" (UID: "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.702140 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" (UID: "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.752963 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-config-data" (OuterVolumeSpecName: "config-data") pod "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" (UID: "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.754869 4895 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.754892 4895 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.754902 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.755089 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" 
" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.755109 4895 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.755124 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.755140 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntw6t\" (UniqueName: \"kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-kube-api-access-ntw6t\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.755157 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.755168 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.758102 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.793565 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-server-conf" (OuterVolumeSpecName: "server-conf") pod "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" (UID: "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.821181 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.833054 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" (UID: "7d3ea6f8-e1cd-41fe-8169-00fc80c995b5"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.857020 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cbcad4af-7c93-4d6e-b825-42a586db5d81\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.857154 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cbcad4af-7c93-4d6e-b825-42a586db5d81-erlang-cookie-secret\") pod \"cbcad4af-7c93-4d6e-b825-42a586db5d81\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.857303 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-config-data\") pod \"cbcad4af-7c93-4d6e-b825-42a586db5d81\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.857345 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-erlang-cookie\") pod \"cbcad4af-7c93-4d6e-b825-42a586db5d81\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.857685 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-server-conf\") pod \"cbcad4af-7c93-4d6e-b825-42a586db5d81\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.857732 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2wvv\" (UniqueName: \"kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-kube-api-access-q2wvv\") pod \"cbcad4af-7c93-4d6e-b825-42a586db5d81\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.857777 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cbcad4af-7c93-4d6e-b825-42a586db5d81-pod-info\") pod \"cbcad4af-7c93-4d6e-b825-42a586db5d81\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.857815 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-tls\") pod \"cbcad4af-7c93-4d6e-b825-42a586db5d81\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.857843 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-plugins\") pod \"cbcad4af-7c93-4d6e-b825-42a586db5d81\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 
09:06:02.857900 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-plugins-conf\") pod \"cbcad4af-7c93-4d6e-b825-42a586db5d81\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.857978 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-confd\") pod \"cbcad4af-7c93-4d6e-b825-42a586db5d81\" (UID: \"cbcad4af-7c93-4d6e-b825-42a586db5d81\") " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.858752 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.858779 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.858792 4895 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.862804 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cbcad4af-7c93-4d6e-b825-42a586db5d81" (UID: "cbcad4af-7c93-4d6e-b825-42a586db5d81"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.863491 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cbcad4af-7c93-4d6e-b825-42a586db5d81" (UID: "cbcad4af-7c93-4d6e-b825-42a586db5d81"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.865334 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cbcad4af-7c93-4d6e-b825-42a586db5d81" (UID: "cbcad4af-7c93-4d6e-b825-42a586db5d81"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.865982 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "cbcad4af-7c93-4d6e-b825-42a586db5d81" (UID: "cbcad4af-7c93-4d6e-b825-42a586db5d81"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.872561 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-kube-api-access-q2wvv" (OuterVolumeSpecName: "kube-api-access-q2wvv") pod "cbcad4af-7c93-4d6e-b825-42a586db5d81" (UID: "cbcad4af-7c93-4d6e-b825-42a586db5d81"). InnerVolumeSpecName "kube-api-access-q2wvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.874599 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcad4af-7c93-4d6e-b825-42a586db5d81-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cbcad4af-7c93-4d6e-b825-42a586db5d81" (UID: "cbcad4af-7c93-4d6e-b825-42a586db5d81"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.885788 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cbcad4af-7c93-4d6e-b825-42a586db5d81" (UID: "cbcad4af-7c93-4d6e-b825-42a586db5d81"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.885955 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cbcad4af-7c93-4d6e-b825-42a586db5d81-pod-info" (OuterVolumeSpecName: "pod-info") pod "cbcad4af-7c93-4d6e-b825-42a586db5d81" (UID: "cbcad4af-7c93-4d6e-b825-42a586db5d81"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.913879 4895 scope.go:117] "RemoveContainer" containerID="7c3e0fe6ef1bed526f92c62b23d0efd1cc2b74bb08f91fe399d1c4d8dcb612a5" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.924630 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-config-data" (OuterVolumeSpecName: "config-data") pod "cbcad4af-7c93-4d6e-b825-42a586db5d81" (UID: "cbcad4af-7c93-4d6e-b825-42a586db5d81"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.947208 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-server-conf" (OuterVolumeSpecName: "server-conf") pod "cbcad4af-7c93-4d6e-b825-42a586db5d81" (UID: "cbcad4af-7c93-4d6e-b825-42a586db5d81"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.955472 4895 scope.go:117] "RemoveContainer" containerID="26d34178f24362025be3c60472a15a6f3b96f11f999bca0c1b399079c33299d8" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.962063 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2wvv\" (UniqueName: \"kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-kube-api-access-q2wvv\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.962118 4895 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cbcad4af-7c93-4d6e-b825-42a586db5d81-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.962131 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.962143 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.962156 4895 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-plugins-conf\") on node \"crc\" 
DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.962203 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.962217 4895 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cbcad4af-7c93-4d6e-b825-42a586db5d81-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.962230 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.962244 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.962256 4895 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cbcad4af-7c93-4d6e-b825-42a586db5d81-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:02 crc kubenswrapper[4895]: I0129 09:06:02.987481 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.005744 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cbcad4af-7c93-4d6e-b825-42a586db5d81" (UID: "cbcad4af-7c93-4d6e-b825-42a586db5d81"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.065183 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.065246 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cbcad4af-7c93-4d6e-b825-42a586db5d81-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.399242 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d3ea6f8-e1cd-41fe-8169-00fc80c995b5","Type":"ContainerDied","Data":"cec0643400228dbfcee52a5e2739034397e89c08765866ff8cd986ac1b1ce4db"} Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.399307 4895 scope.go:117] "RemoveContainer" containerID="78b578d09bb9a0244c465780f9b1e9a302262947c9504c01f4ba2604b679e677" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.399466 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.404151 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cbcad4af-7c93-4d6e-b825-42a586db5d81","Type":"ContainerDied","Data":"3fb13d0621aed7e5fc30264d2ec0a15b09d9ea7753e5aea5b2ee3a86d4d6ea94"} Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.404217 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.436018 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.450835 4895 scope.go:117] "RemoveContainer" containerID="a79ddc6cc9a8081dcca315fbaf6560ed3ec63f0c7c48656d13a4540ecbf048bd" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.460522 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.495159 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.514865 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.524737 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:06:03 crc kubenswrapper[4895]: E0129 09:06:03.525482 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbcad4af-7c93-4d6e-b825-42a586db5d81" containerName="setup-container" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.525521 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbcad4af-7c93-4d6e-b825-42a586db5d81" containerName="setup-container" Jan 29 09:06:03 crc kubenswrapper[4895]: E0129 09:06:03.525553 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbcad4af-7c93-4d6e-b825-42a586db5d81" containerName="rabbitmq" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.525562 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbcad4af-7c93-4d6e-b825-42a586db5d81" containerName="rabbitmq" Jan 29 09:06:03 crc kubenswrapper[4895]: E0129 09:06:03.525586 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" 
containerName="setup-container" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.525595 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" containerName="setup-container" Jan 29 09:06:03 crc kubenswrapper[4895]: E0129 09:06:03.525612 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" containerName="rabbitmq" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.525619 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" containerName="rabbitmq" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.525889 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" containerName="rabbitmq" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.525939 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbcad4af-7c93-4d6e-b825-42a586db5d81" containerName="rabbitmq" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.527473 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.532639 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.532679 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.532872 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.532985 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-grh7r" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.533067 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.533179 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.536082 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.536141 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.575294 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.575901 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.580930 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.582766 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.583049 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.583274 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.583428 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.583484 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.583437 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dxlcp" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.592764 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.686880 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687019 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687057 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687099 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb202ed2-1680-4411-83d3-4dcfdc317ac9-config-data\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687127 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb202ed2-1680-4411-83d3-4dcfdc317ac9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687144 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb202ed2-1680-4411-83d3-4dcfdc317ac9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687170 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687196 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb202ed2-1680-4411-83d3-4dcfdc317ac9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687216 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb202ed2-1680-4411-83d3-4dcfdc317ac9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687242 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687277 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687296 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687324 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb202ed2-1680-4411-83d3-4dcfdc317ac9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687360 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb202ed2-1680-4411-83d3-4dcfdc317ac9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687387 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687414 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687434 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb202ed2-1680-4411-83d3-4dcfdc317ac9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687452 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687485 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxq57\" (UniqueName: \"kubernetes.io/projected/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-kube-api-access-rxq57\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687512 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5l4t\" (UniqueName: \"kubernetes.io/projected/fb202ed2-1680-4411-83d3-4dcfdc317ac9-kube-api-access-z5l4t\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687548 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb202ed2-1680-4411-83d3-4dcfdc317ac9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.687570 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789062 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxq57\" (UniqueName: \"kubernetes.io/projected/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-kube-api-access-rxq57\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789152 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5l4t\" (UniqueName: \"kubernetes.io/projected/fb202ed2-1680-4411-83d3-4dcfdc317ac9-kube-api-access-z5l4t\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789193 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789215 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb202ed2-1680-4411-83d3-4dcfdc317ac9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789255 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789299 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789325 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789348 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb202ed2-1680-4411-83d3-4dcfdc317ac9-config-data\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789368 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb202ed2-1680-4411-83d3-4dcfdc317ac9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789389 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb202ed2-1680-4411-83d3-4dcfdc317ac9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789416 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789436 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb202ed2-1680-4411-83d3-4dcfdc317ac9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789456 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb202ed2-1680-4411-83d3-4dcfdc317ac9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789482 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789509 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789526 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789559 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb202ed2-1680-4411-83d3-4dcfdc317ac9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789595 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb202ed2-1680-4411-83d3-4dcfdc317ac9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789627 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789657 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.789681 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb202ed2-1680-4411-83d3-4dcfdc317ac9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc 
kubenswrapper[4895]: I0129 09:06:03.789702 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.790202 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.790316 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.790792 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb202ed2-1680-4411-83d3-4dcfdc317ac9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.791428 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.791900 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.792783 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.793780 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.794159 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb202ed2-1680-4411-83d3-4dcfdc317ac9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.794241 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb202ed2-1680-4411-83d3-4dcfdc317ac9-config-data\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.794790 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.795013 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb202ed2-1680-4411-83d3-4dcfdc317ac9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.796424 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb202ed2-1680-4411-83d3-4dcfdc317ac9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.799370 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.799389 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb202ed2-1680-4411-83d3-4dcfdc317ac9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.800183 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 
09:06:03.800335 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.800478 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.802738 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb202ed2-1680-4411-83d3-4dcfdc317ac9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.807955 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb202ed2-1680-4411-83d3-4dcfdc317ac9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.808182 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb202ed2-1680-4411-83d3-4dcfdc317ac9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.810003 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxq57\" (UniqueName: 
\"kubernetes.io/projected/b1ce25b0-0fc4-4560-88ba-ee5261d106e9-kube-api-access-rxq57\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.815182 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5l4t\" (UniqueName: \"kubernetes.io/projected/fb202ed2-1680-4411-83d3-4dcfdc317ac9-kube-api-access-z5l4t\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.833847 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b1ce25b0-0fc4-4560-88ba-ee5261d106e9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.842109 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"fb202ed2-1680-4411-83d3-4dcfdc317ac9\") " pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.875122 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 09:06:03 crc kubenswrapper[4895]: I0129 09:06:03.920513 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:04 crc kubenswrapper[4895]: I0129 09:06:04.306031 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 09:06:04 crc kubenswrapper[4895]: I0129 09:06:04.425223 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fb202ed2-1680-4411-83d3-4dcfdc317ac9","Type":"ContainerStarted","Data":"5dfb10e384d70daadc688f3aa5d5018825d4d3e3c2b1c78e3c5da29cba0d9af6"} Jan 29 09:06:04 crc kubenswrapper[4895]: I0129 09:06:04.489624 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 09:06:05 crc kubenswrapper[4895]: I0129 09:06:05.234434 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d3ea6f8-e1cd-41fe-8169-00fc80c995b5" path="/var/lib/kubelet/pods/7d3ea6f8-e1cd-41fe-8169-00fc80c995b5/volumes" Jan 29 09:06:05 crc kubenswrapper[4895]: I0129 09:06:05.236226 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbcad4af-7c93-4d6e-b825-42a586db5d81" path="/var/lib/kubelet/pods/cbcad4af-7c93-4d6e-b825-42a586db5d81/volumes" Jan 29 09:06:05 crc kubenswrapper[4895]: I0129 09:06:05.455542 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b1ce25b0-0fc4-4560-88ba-ee5261d106e9","Type":"ContainerStarted","Data":"88cb65e02ef5d8e37f61d206cbbebfc1470ebedc4e286e67441843f5e9da1ee1"} Jan 29 09:06:07 crc kubenswrapper[4895]: I0129 09:06:07.482952 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b1ce25b0-0fc4-4560-88ba-ee5261d106e9","Type":"ContainerStarted","Data":"b2267ca42fd343510c9ddfe1d9cba42fdac0015664cbff384c4873d172557087"} Jan 29 09:06:07 crc kubenswrapper[4895]: I0129 09:06:07.485737 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"fb202ed2-1680-4411-83d3-4dcfdc317ac9","Type":"ContainerStarted","Data":"9af8881ca010dac845b980b190c2c0b3d8d238ea21ee13d836d595643de6e702"} Jan 29 09:06:39 crc kubenswrapper[4895]: I0129 09:06:39.873631 4895 generic.go:334] "Generic (PLEG): container finished" podID="b1ce25b0-0fc4-4560-88ba-ee5261d106e9" containerID="b2267ca42fd343510c9ddfe1d9cba42fdac0015664cbff384c4873d172557087" exitCode=0 Jan 29 09:06:39 crc kubenswrapper[4895]: I0129 09:06:39.873831 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b1ce25b0-0fc4-4560-88ba-ee5261d106e9","Type":"ContainerDied","Data":"b2267ca42fd343510c9ddfe1d9cba42fdac0015664cbff384c4873d172557087"} Jan 29 09:06:39 crc kubenswrapper[4895]: I0129 09:06:39.879092 4895 generic.go:334] "Generic (PLEG): container finished" podID="fb202ed2-1680-4411-83d3-4dcfdc317ac9" containerID="9af8881ca010dac845b980b190c2c0b3d8d238ea21ee13d836d595643de6e702" exitCode=0 Jan 29 09:06:39 crc kubenswrapper[4895]: I0129 09:06:39.879161 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fb202ed2-1680-4411-83d3-4dcfdc317ac9","Type":"ContainerDied","Data":"9af8881ca010dac845b980b190c2c0b3d8d238ea21ee13d836d595643de6e702"} Jan 29 09:06:40 crc kubenswrapper[4895]: I0129 09:06:40.893281 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fb202ed2-1680-4411-83d3-4dcfdc317ac9","Type":"ContainerStarted","Data":"f156db8e3ad77ff95b57fe713bcdcb03090066894c33fe0b7871807300294881"} Jan 29 09:06:40 crc kubenswrapper[4895]: I0129 09:06:40.894244 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 09:06:40 crc kubenswrapper[4895]: I0129 09:06:40.896324 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"b1ce25b0-0fc4-4560-88ba-ee5261d106e9","Type":"ContainerStarted","Data":"556bd91fcf3ad097b2739bf015bf3f33479299b1f4fb95f4b45b4530658eb459"} Jan 29 09:06:40 crc kubenswrapper[4895]: I0129 09:06:40.896503 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:06:40 crc kubenswrapper[4895]: I0129 09:06:40.922831 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.922791983 podStartE2EDuration="37.922791983s" podCreationTimestamp="2026-01-29 09:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:06:40.91894304 +0000 UTC m=+1542.560451186" watchObservedRunningTime="2026-01-29 09:06:40.922791983 +0000 UTC m=+1542.564300139" Jan 29 09:06:46 crc kubenswrapper[4895]: I0129 09:06:46.026386 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:06:46 crc kubenswrapper[4895]: I0129 09:06:46.027144 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:06:53 crc kubenswrapper[4895]: I0129 09:06:53.879218 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 09:06:53 crc kubenswrapper[4895]: I0129 09:06:53.911019 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=50.91099258 podStartE2EDuration="50.91099258s" podCreationTimestamp="2026-01-29 09:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:06:40.952064391 +0000 UTC m=+1542.593572557" watchObservedRunningTime="2026-01-29 09:06:53.91099258 +0000 UTC m=+1555.552500726" Jan 29 09:06:53 crc kubenswrapper[4895]: I0129 09:06:53.923122 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 09:07:03 crc kubenswrapper[4895]: I0129 09:07:03.142947 4895 scope.go:117] "RemoveContainer" containerID="feb1b04e1d1f76a2715d4c8a5ddfd6f4184a37a5d293ae92129d0771b8f3b915" Jan 29 09:07:03 crc kubenswrapper[4895]: I0129 09:07:03.197523 4895 scope.go:117] "RemoveContainer" containerID="4f8a12db7447d5337b4d7655cd7a1fd7f96361e8a67c40734c577d37d31236cb" Jan 29 09:07:16 crc kubenswrapper[4895]: I0129 09:07:16.028062 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:07:16 crc kubenswrapper[4895]: I0129 09:07:16.028837 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:07:42 crc kubenswrapper[4895]: I0129 09:07:42.194740 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c6mgj"] Jan 29 09:07:42 crc kubenswrapper[4895]: I0129 09:07:42.198456 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:42 crc kubenswrapper[4895]: I0129 09:07:42.223346 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6mgj"] Jan 29 09:07:42 crc kubenswrapper[4895]: I0129 09:07:42.393508 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-catalog-content\") pod \"community-operators-c6mgj\" (UID: \"f527d1ca-fa3b-4f8b-93c6-285ae154ac52\") " pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:42 crc kubenswrapper[4895]: I0129 09:07:42.394021 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75gns\" (UniqueName: \"kubernetes.io/projected/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-kube-api-access-75gns\") pod \"community-operators-c6mgj\" (UID: \"f527d1ca-fa3b-4f8b-93c6-285ae154ac52\") " pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:42 crc kubenswrapper[4895]: I0129 09:07:42.394347 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-utilities\") pod \"community-operators-c6mgj\" (UID: \"f527d1ca-fa3b-4f8b-93c6-285ae154ac52\") " pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:42 crc kubenswrapper[4895]: I0129 09:07:42.496819 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-catalog-content\") pod \"community-operators-c6mgj\" (UID: \"f527d1ca-fa3b-4f8b-93c6-285ae154ac52\") " pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:42 crc kubenswrapper[4895]: I0129 09:07:42.496949 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-75gns\" (UniqueName: \"kubernetes.io/projected/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-kube-api-access-75gns\") pod \"community-operators-c6mgj\" (UID: \"f527d1ca-fa3b-4f8b-93c6-285ae154ac52\") " pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:42 crc kubenswrapper[4895]: I0129 09:07:42.497036 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-utilities\") pod \"community-operators-c6mgj\" (UID: \"f527d1ca-fa3b-4f8b-93c6-285ae154ac52\") " pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:42 crc kubenswrapper[4895]: I0129 09:07:42.497448 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-catalog-content\") pod \"community-operators-c6mgj\" (UID: \"f527d1ca-fa3b-4f8b-93c6-285ae154ac52\") " pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:42 crc kubenswrapper[4895]: I0129 09:07:42.497537 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-utilities\") pod \"community-operators-c6mgj\" (UID: \"f527d1ca-fa3b-4f8b-93c6-285ae154ac52\") " pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:42 crc kubenswrapper[4895]: I0129 09:07:42.523262 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75gns\" (UniqueName: \"kubernetes.io/projected/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-kube-api-access-75gns\") pod \"community-operators-c6mgj\" (UID: \"f527d1ca-fa3b-4f8b-93c6-285ae154ac52\") " pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:42 crc kubenswrapper[4895]: I0129 09:07:42.541169 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:43 crc kubenswrapper[4895]: I0129 09:07:43.263625 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6mgj"] Jan 29 09:07:43 crc kubenswrapper[4895]: I0129 09:07:43.803958 4895 generic.go:334] "Generic (PLEG): container finished" podID="f527d1ca-fa3b-4f8b-93c6-285ae154ac52" containerID="9d62a59a81cc94ac01c73cd0f64d23922d4b35004d5e525a4d73fc97d88ce796" exitCode=0 Jan 29 09:07:43 crc kubenswrapper[4895]: I0129 09:07:43.804057 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6mgj" event={"ID":"f527d1ca-fa3b-4f8b-93c6-285ae154ac52","Type":"ContainerDied","Data":"9d62a59a81cc94ac01c73cd0f64d23922d4b35004d5e525a4d73fc97d88ce796"} Jan 29 09:07:43 crc kubenswrapper[4895]: I0129 09:07:43.804482 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6mgj" event={"ID":"f527d1ca-fa3b-4f8b-93c6-285ae154ac52","Type":"ContainerStarted","Data":"bc963f607ea8c264e6808654a0b85ea9f8e7d82b7bef48557379338f454837bb"} Jan 29 09:07:45 crc kubenswrapper[4895]: I0129 09:07:45.910793 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6mgj" event={"ID":"f527d1ca-fa3b-4f8b-93c6-285ae154ac52","Type":"ContainerStarted","Data":"604d96a237eecc38fa941de21b06323c9a4265496e354faa710b04b6afccb6ea"} Jan 29 09:07:46 crc kubenswrapper[4895]: I0129 09:07:46.020082 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:07:46 crc kubenswrapper[4895]: I0129 09:07:46.020544 4895 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:07:46 crc kubenswrapper[4895]: I0129 09:07:46.020612 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 09:07:46 crc kubenswrapper[4895]: I0129 09:07:46.021227 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6"} pod="openshift-machine-config-operator/machine-config-daemon-z82hk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:07:46 crc kubenswrapper[4895]: I0129 09:07:46.021304 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" containerID="cri-o://cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" gracePeriod=600 Jan 29 09:07:46 crc kubenswrapper[4895]: E0129 09:07:46.145988 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:07:46 crc kubenswrapper[4895]: I0129 09:07:46.926639 4895 generic.go:334] "Generic (PLEG): container finished" podID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" 
containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" exitCode=0 Jan 29 09:07:46 crc kubenswrapper[4895]: I0129 09:07:46.926731 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerDied","Data":"cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6"} Jan 29 09:07:46 crc kubenswrapper[4895]: I0129 09:07:46.926858 4895 scope.go:117] "RemoveContainer" containerID="2bf4fdb573e9460c60a2fe1e2302b28757eefe98ad1ae3c12a1c65609fd1bb38" Jan 29 09:07:46 crc kubenswrapper[4895]: I0129 09:07:46.927775 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:07:46 crc kubenswrapper[4895]: E0129 09:07:46.928779 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:07:46 crc kubenswrapper[4895]: I0129 09:07:46.931809 4895 generic.go:334] "Generic (PLEG): container finished" podID="f527d1ca-fa3b-4f8b-93c6-285ae154ac52" containerID="604d96a237eecc38fa941de21b06323c9a4265496e354faa710b04b6afccb6ea" exitCode=0 Jan 29 09:07:46 crc kubenswrapper[4895]: I0129 09:07:46.931989 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6mgj" event={"ID":"f527d1ca-fa3b-4f8b-93c6-285ae154ac52","Type":"ContainerDied","Data":"604d96a237eecc38fa941de21b06323c9a4265496e354faa710b04b6afccb6ea"} Jan 29 09:07:47 crc kubenswrapper[4895]: I0129 09:07:47.998331 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-c6mgj" event={"ID":"f527d1ca-fa3b-4f8b-93c6-285ae154ac52","Type":"ContainerStarted","Data":"4212cd9b3ced17feeff3f5c7df7846843bec647e7ed8b64558673d8a9aae37fc"} Jan 29 09:07:48 crc kubenswrapper[4895]: I0129 09:07:48.035246 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c6mgj" podStartSLOduration=2.480624389 podStartE2EDuration="6.035211956s" podCreationTimestamp="2026-01-29 09:07:42 +0000 UTC" firstStartedPulling="2026-01-29 09:07:43.806814643 +0000 UTC m=+1605.448322779" lastFinishedPulling="2026-01-29 09:07:47.3614022 +0000 UTC m=+1609.002910346" observedRunningTime="2026-01-29 09:07:48.020008362 +0000 UTC m=+1609.661516528" watchObservedRunningTime="2026-01-29 09:07:48.035211956 +0000 UTC m=+1609.676720122" Jan 29 09:07:52 crc kubenswrapper[4895]: I0129 09:07:52.541469 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:52 crc kubenswrapper[4895]: I0129 09:07:52.542541 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:52 crc kubenswrapper[4895]: I0129 09:07:52.600995 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:53 crc kubenswrapper[4895]: I0129 09:07:53.106530 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:53 crc kubenswrapper[4895]: I0129 09:07:53.175882 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c6mgj"] Jan 29 09:07:55 crc kubenswrapper[4895]: I0129 09:07:55.078653 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c6mgj" 
podUID="f527d1ca-fa3b-4f8b-93c6-285ae154ac52" containerName="registry-server" containerID="cri-o://4212cd9b3ced17feeff3f5c7df7846843bec647e7ed8b64558673d8a9aae37fc" gracePeriod=2 Jan 29 09:07:55 crc kubenswrapper[4895]: I0129 09:07:55.602294 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:55 crc kubenswrapper[4895]: I0129 09:07:55.618477 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-utilities\") pod \"f527d1ca-fa3b-4f8b-93c6-285ae154ac52\" (UID: \"f527d1ca-fa3b-4f8b-93c6-285ae154ac52\") " Jan 29 09:07:55 crc kubenswrapper[4895]: I0129 09:07:55.618597 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-catalog-content\") pod \"f527d1ca-fa3b-4f8b-93c6-285ae154ac52\" (UID: \"f527d1ca-fa3b-4f8b-93c6-285ae154ac52\") " Jan 29 09:07:55 crc kubenswrapper[4895]: I0129 09:07:55.618652 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75gns\" (UniqueName: \"kubernetes.io/projected/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-kube-api-access-75gns\") pod \"f527d1ca-fa3b-4f8b-93c6-285ae154ac52\" (UID: \"f527d1ca-fa3b-4f8b-93c6-285ae154ac52\") " Jan 29 09:07:55 crc kubenswrapper[4895]: I0129 09:07:55.619999 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-utilities" (OuterVolumeSpecName: "utilities") pod "f527d1ca-fa3b-4f8b-93c6-285ae154ac52" (UID: "f527d1ca-fa3b-4f8b-93c6-285ae154ac52"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:07:55 crc kubenswrapper[4895]: I0129 09:07:55.631133 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-kube-api-access-75gns" (OuterVolumeSpecName: "kube-api-access-75gns") pod "f527d1ca-fa3b-4f8b-93c6-285ae154ac52" (UID: "f527d1ca-fa3b-4f8b-93c6-285ae154ac52"). InnerVolumeSpecName "kube-api-access-75gns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:07:55 crc kubenswrapper[4895]: I0129 09:07:55.684349 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f527d1ca-fa3b-4f8b-93c6-285ae154ac52" (UID: "f527d1ca-fa3b-4f8b-93c6-285ae154ac52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:07:55 crc kubenswrapper[4895]: I0129 09:07:55.721881 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:07:55 crc kubenswrapper[4895]: I0129 09:07:55.722260 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75gns\" (UniqueName: \"kubernetes.io/projected/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-kube-api-access-75gns\") on node \"crc\" DevicePath \"\"" Jan 29 09:07:55 crc kubenswrapper[4895]: I0129 09:07:55.722338 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f527d1ca-fa3b-4f8b-93c6-285ae154ac52-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:07:56 crc kubenswrapper[4895]: I0129 09:07:56.093021 4895 generic.go:334] "Generic (PLEG): container finished" podID="f527d1ca-fa3b-4f8b-93c6-285ae154ac52" 
containerID="4212cd9b3ced17feeff3f5c7df7846843bec647e7ed8b64558673d8a9aae37fc" exitCode=0 Jan 29 09:07:56 crc kubenswrapper[4895]: I0129 09:07:56.093089 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6mgj" event={"ID":"f527d1ca-fa3b-4f8b-93c6-285ae154ac52","Type":"ContainerDied","Data":"4212cd9b3ced17feeff3f5c7df7846843bec647e7ed8b64558673d8a9aae37fc"} Jan 29 09:07:56 crc kubenswrapper[4895]: I0129 09:07:56.093120 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c6mgj" Jan 29 09:07:56 crc kubenswrapper[4895]: I0129 09:07:56.093144 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6mgj" event={"ID":"f527d1ca-fa3b-4f8b-93c6-285ae154ac52","Type":"ContainerDied","Data":"bc963f607ea8c264e6808654a0b85ea9f8e7d82b7bef48557379338f454837bb"} Jan 29 09:07:56 crc kubenswrapper[4895]: I0129 09:07:56.093170 4895 scope.go:117] "RemoveContainer" containerID="4212cd9b3ced17feeff3f5c7df7846843bec647e7ed8b64558673d8a9aae37fc" Jan 29 09:07:56 crc kubenswrapper[4895]: I0129 09:07:56.122418 4895 scope.go:117] "RemoveContainer" containerID="604d96a237eecc38fa941de21b06323c9a4265496e354faa710b04b6afccb6ea" Jan 29 09:07:56 crc kubenswrapper[4895]: I0129 09:07:56.135134 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c6mgj"] Jan 29 09:07:56 crc kubenswrapper[4895]: I0129 09:07:56.147202 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c6mgj"] Jan 29 09:07:56 crc kubenswrapper[4895]: I0129 09:07:56.159657 4895 scope.go:117] "RemoveContainer" containerID="9d62a59a81cc94ac01c73cd0f64d23922d4b35004d5e525a4d73fc97d88ce796" Jan 29 09:07:56 crc kubenswrapper[4895]: I0129 09:07:56.219085 4895 scope.go:117] "RemoveContainer" containerID="4212cd9b3ced17feeff3f5c7df7846843bec647e7ed8b64558673d8a9aae37fc" Jan 29 
09:07:56 crc kubenswrapper[4895]: E0129 09:07:56.219666 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4212cd9b3ced17feeff3f5c7df7846843bec647e7ed8b64558673d8a9aae37fc\": container with ID starting with 4212cd9b3ced17feeff3f5c7df7846843bec647e7ed8b64558673d8a9aae37fc not found: ID does not exist" containerID="4212cd9b3ced17feeff3f5c7df7846843bec647e7ed8b64558673d8a9aae37fc" Jan 29 09:07:56 crc kubenswrapper[4895]: I0129 09:07:56.219767 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4212cd9b3ced17feeff3f5c7df7846843bec647e7ed8b64558673d8a9aae37fc"} err="failed to get container status \"4212cd9b3ced17feeff3f5c7df7846843bec647e7ed8b64558673d8a9aae37fc\": rpc error: code = NotFound desc = could not find container \"4212cd9b3ced17feeff3f5c7df7846843bec647e7ed8b64558673d8a9aae37fc\": container with ID starting with 4212cd9b3ced17feeff3f5c7df7846843bec647e7ed8b64558673d8a9aae37fc not found: ID does not exist" Jan 29 09:07:56 crc kubenswrapper[4895]: I0129 09:07:56.219815 4895 scope.go:117] "RemoveContainer" containerID="604d96a237eecc38fa941de21b06323c9a4265496e354faa710b04b6afccb6ea" Jan 29 09:07:56 crc kubenswrapper[4895]: E0129 09:07:56.220336 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"604d96a237eecc38fa941de21b06323c9a4265496e354faa710b04b6afccb6ea\": container with ID starting with 604d96a237eecc38fa941de21b06323c9a4265496e354faa710b04b6afccb6ea not found: ID does not exist" containerID="604d96a237eecc38fa941de21b06323c9a4265496e354faa710b04b6afccb6ea" Jan 29 09:07:56 crc kubenswrapper[4895]: I0129 09:07:56.220385 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"604d96a237eecc38fa941de21b06323c9a4265496e354faa710b04b6afccb6ea"} err="failed to get container status 
\"604d96a237eecc38fa941de21b06323c9a4265496e354faa710b04b6afccb6ea\": rpc error: code = NotFound desc = could not find container \"604d96a237eecc38fa941de21b06323c9a4265496e354faa710b04b6afccb6ea\": container with ID starting with 604d96a237eecc38fa941de21b06323c9a4265496e354faa710b04b6afccb6ea not found: ID does not exist" Jan 29 09:07:56 crc kubenswrapper[4895]: I0129 09:07:56.220424 4895 scope.go:117] "RemoveContainer" containerID="9d62a59a81cc94ac01c73cd0f64d23922d4b35004d5e525a4d73fc97d88ce796" Jan 29 09:07:56 crc kubenswrapper[4895]: E0129 09:07:56.220720 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d62a59a81cc94ac01c73cd0f64d23922d4b35004d5e525a4d73fc97d88ce796\": container with ID starting with 9d62a59a81cc94ac01c73cd0f64d23922d4b35004d5e525a4d73fc97d88ce796 not found: ID does not exist" containerID="9d62a59a81cc94ac01c73cd0f64d23922d4b35004d5e525a4d73fc97d88ce796" Jan 29 09:07:56 crc kubenswrapper[4895]: I0129 09:07:56.220753 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d62a59a81cc94ac01c73cd0f64d23922d4b35004d5e525a4d73fc97d88ce796"} err="failed to get container status \"9d62a59a81cc94ac01c73cd0f64d23922d4b35004d5e525a4d73fc97d88ce796\": rpc error: code = NotFound desc = could not find container \"9d62a59a81cc94ac01c73cd0f64d23922d4b35004d5e525a4d73fc97d88ce796\": container with ID starting with 9d62a59a81cc94ac01c73cd0f64d23922d4b35004d5e525a4d73fc97d88ce796 not found: ID does not exist" Jan 29 09:07:57 crc kubenswrapper[4895]: I0129 09:07:57.226016 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f527d1ca-fa3b-4f8b-93c6-285ae154ac52" path="/var/lib/kubelet/pods/f527d1ca-fa3b-4f8b-93c6-285ae154ac52/volumes" Jan 29 09:08:02 crc kubenswrapper[4895]: I0129 09:08:02.267935 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 
09:08:02 crc kubenswrapper[4895]: E0129 09:08:02.272874 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:08:03 crc kubenswrapper[4895]: I0129 09:08:03.357847 4895 scope.go:117] "RemoveContainer" containerID="01112b4ca8253d6cebd5d281ced6a739d13c93332d678dedfe7ff9541f3e090d" Jan 29 09:08:13 crc kubenswrapper[4895]: I0129 09:08:13.211253 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:08:13 crc kubenswrapper[4895]: E0129 09:08:13.212334 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:08:13 crc kubenswrapper[4895]: I0129 09:08:13.732116 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p25ht"] Jan 29 09:08:13 crc kubenswrapper[4895]: E0129 09:08:13.732839 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f527d1ca-fa3b-4f8b-93c6-285ae154ac52" containerName="extract-utilities" Jan 29 09:08:13 crc kubenswrapper[4895]: I0129 09:08:13.732868 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f527d1ca-fa3b-4f8b-93c6-285ae154ac52" containerName="extract-utilities" Jan 29 09:08:13 crc kubenswrapper[4895]: E0129 09:08:13.732940 4895 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f527d1ca-fa3b-4f8b-93c6-285ae154ac52" containerName="registry-server" Jan 29 09:08:13 crc kubenswrapper[4895]: I0129 09:08:13.732952 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f527d1ca-fa3b-4f8b-93c6-285ae154ac52" containerName="registry-server" Jan 29 09:08:13 crc kubenswrapper[4895]: E0129 09:08:13.732972 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f527d1ca-fa3b-4f8b-93c6-285ae154ac52" containerName="extract-content" Jan 29 09:08:13 crc kubenswrapper[4895]: I0129 09:08:13.732981 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f527d1ca-fa3b-4f8b-93c6-285ae154ac52" containerName="extract-content" Jan 29 09:08:13 crc kubenswrapper[4895]: I0129 09:08:13.733254 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f527d1ca-fa3b-4f8b-93c6-285ae154ac52" containerName="registry-server" Jan 29 09:08:13 crc kubenswrapper[4895]: I0129 09:08:13.735521 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:13 crc kubenswrapper[4895]: I0129 09:08:13.768928 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p25ht"] Jan 29 09:08:13 crc kubenswrapper[4895]: I0129 09:08:13.847734 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlxtj\" (UniqueName: \"kubernetes.io/projected/e2096686-0f0e-4a3d-ac33-e700d3c76753-kube-api-access-qlxtj\") pod \"redhat-marketplace-p25ht\" (UID: \"e2096686-0f0e-4a3d-ac33-e700d3c76753\") " pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:13 crc kubenswrapper[4895]: I0129 09:08:13.848104 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2096686-0f0e-4a3d-ac33-e700d3c76753-utilities\") pod \"redhat-marketplace-p25ht\" (UID: \"e2096686-0f0e-4a3d-ac33-e700d3c76753\") " pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:13 crc kubenswrapper[4895]: I0129 09:08:13.848570 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2096686-0f0e-4a3d-ac33-e700d3c76753-catalog-content\") pod \"redhat-marketplace-p25ht\" (UID: \"e2096686-0f0e-4a3d-ac33-e700d3c76753\") " pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:13 crc kubenswrapper[4895]: I0129 09:08:13.986957 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2096686-0f0e-4a3d-ac33-e700d3c76753-catalog-content\") pod \"redhat-marketplace-p25ht\" (UID: \"e2096686-0f0e-4a3d-ac33-e700d3c76753\") " pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:13 crc kubenswrapper[4895]: I0129 09:08:13.987350 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qlxtj\" (UniqueName: \"kubernetes.io/projected/e2096686-0f0e-4a3d-ac33-e700d3c76753-kube-api-access-qlxtj\") pod \"redhat-marketplace-p25ht\" (UID: \"e2096686-0f0e-4a3d-ac33-e700d3c76753\") " pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:13 crc kubenswrapper[4895]: I0129 09:08:13.987447 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2096686-0f0e-4a3d-ac33-e700d3c76753-utilities\") pod \"redhat-marketplace-p25ht\" (UID: \"e2096686-0f0e-4a3d-ac33-e700d3c76753\") " pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:13 crc kubenswrapper[4895]: I0129 09:08:13.988329 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2096686-0f0e-4a3d-ac33-e700d3c76753-utilities\") pod \"redhat-marketplace-p25ht\" (UID: \"e2096686-0f0e-4a3d-ac33-e700d3c76753\") " pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:13 crc kubenswrapper[4895]: I0129 09:08:13.988466 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2096686-0f0e-4a3d-ac33-e700d3c76753-catalog-content\") pod \"redhat-marketplace-p25ht\" (UID: \"e2096686-0f0e-4a3d-ac33-e700d3c76753\") " pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:14 crc kubenswrapper[4895]: I0129 09:08:14.020004 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlxtj\" (UniqueName: \"kubernetes.io/projected/e2096686-0f0e-4a3d-ac33-e700d3c76753-kube-api-access-qlxtj\") pod \"redhat-marketplace-p25ht\" (UID: \"e2096686-0f0e-4a3d-ac33-e700d3c76753\") " pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:14 crc kubenswrapper[4895]: I0129 09:08:14.102895 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:14 crc kubenswrapper[4895]: I0129 09:08:14.664017 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p25ht"] Jan 29 09:08:15 crc kubenswrapper[4895]: I0129 09:08:15.451719 4895 generic.go:334] "Generic (PLEG): container finished" podID="e2096686-0f0e-4a3d-ac33-e700d3c76753" containerID="01c78224b38421c6bead9cab323084bdada7b612173bbc8b415fa7527be2d63c" exitCode=0 Jan 29 09:08:15 crc kubenswrapper[4895]: I0129 09:08:15.451809 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p25ht" event={"ID":"e2096686-0f0e-4a3d-ac33-e700d3c76753","Type":"ContainerDied","Data":"01c78224b38421c6bead9cab323084bdada7b612173bbc8b415fa7527be2d63c"} Jan 29 09:08:15 crc kubenswrapper[4895]: I0129 09:08:15.452712 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p25ht" event={"ID":"e2096686-0f0e-4a3d-ac33-e700d3c76753","Type":"ContainerStarted","Data":"f66b479c85653229b5be847a51e95db0d6bbf3691146fd49b978905f38a7044a"} Jan 29 09:08:15 crc kubenswrapper[4895]: I0129 09:08:15.455198 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:08:17 crc kubenswrapper[4895]: I0129 09:08:17.494815 4895 generic.go:334] "Generic (PLEG): container finished" podID="e2096686-0f0e-4a3d-ac33-e700d3c76753" containerID="685e19e645c3e22da5887ee2243729f745d4e73c71da97023347983589aed208" exitCode=0 Jan 29 09:08:17 crc kubenswrapper[4895]: I0129 09:08:17.494931 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p25ht" event={"ID":"e2096686-0f0e-4a3d-ac33-e700d3c76753","Type":"ContainerDied","Data":"685e19e645c3e22da5887ee2243729f745d4e73c71da97023347983589aed208"} Jan 29 09:08:18 crc kubenswrapper[4895]: I0129 09:08:18.511655 4895 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-p25ht" event={"ID":"e2096686-0f0e-4a3d-ac33-e700d3c76753","Type":"ContainerStarted","Data":"2368e37c650625a2f4a9949d1d96377ce3a67bccd698aa7021e62e921f537f70"} Jan 29 09:08:24 crc kubenswrapper[4895]: I0129 09:08:24.103138 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:24 crc kubenswrapper[4895]: I0129 09:08:24.104022 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:24 crc kubenswrapper[4895]: I0129 09:08:24.158996 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:24 crc kubenswrapper[4895]: I0129 09:08:24.191143 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p25ht" podStartSLOduration=8.698674932 podStartE2EDuration="11.191107655s" podCreationTimestamp="2026-01-29 09:08:13 +0000 UTC" firstStartedPulling="2026-01-29 09:08:15.454775276 +0000 UTC m=+1637.096283422" lastFinishedPulling="2026-01-29 09:08:17.947207999 +0000 UTC m=+1639.588716145" observedRunningTime="2026-01-29 09:08:18.577859948 +0000 UTC m=+1640.219368114" watchObservedRunningTime="2026-01-29 09:08:24.191107655 +0000 UTC m=+1645.832615801" Jan 29 09:08:24 crc kubenswrapper[4895]: I0129 09:08:24.633774 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:24 crc kubenswrapper[4895]: I0129 09:08:24.707999 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p25ht"] Jan 29 09:08:26 crc kubenswrapper[4895]: I0129 09:08:26.616423 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p25ht" 
podUID="e2096686-0f0e-4a3d-ac33-e700d3c76753" containerName="registry-server" containerID="cri-o://2368e37c650625a2f4a9949d1d96377ce3a67bccd698aa7021e62e921f537f70" gracePeriod=2 Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.111269 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.196687 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlxtj\" (UniqueName: \"kubernetes.io/projected/e2096686-0f0e-4a3d-ac33-e700d3c76753-kube-api-access-qlxtj\") pod \"e2096686-0f0e-4a3d-ac33-e700d3c76753\" (UID: \"e2096686-0f0e-4a3d-ac33-e700d3c76753\") " Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.196979 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2096686-0f0e-4a3d-ac33-e700d3c76753-catalog-content\") pod \"e2096686-0f0e-4a3d-ac33-e700d3c76753\" (UID: \"e2096686-0f0e-4a3d-ac33-e700d3c76753\") " Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.197014 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2096686-0f0e-4a3d-ac33-e700d3c76753-utilities\") pod \"e2096686-0f0e-4a3d-ac33-e700d3c76753\" (UID: \"e2096686-0f0e-4a3d-ac33-e700d3c76753\") " Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.201577 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2096686-0f0e-4a3d-ac33-e700d3c76753-utilities" (OuterVolumeSpecName: "utilities") pod "e2096686-0f0e-4a3d-ac33-e700d3c76753" (UID: "e2096686-0f0e-4a3d-ac33-e700d3c76753"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.228122 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2096686-0f0e-4a3d-ac33-e700d3c76753-kube-api-access-qlxtj" (OuterVolumeSpecName: "kube-api-access-qlxtj") pod "e2096686-0f0e-4a3d-ac33-e700d3c76753" (UID: "e2096686-0f0e-4a3d-ac33-e700d3c76753"). InnerVolumeSpecName "kube-api-access-qlxtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.230768 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2096686-0f0e-4a3d-ac33-e700d3c76753-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2096686-0f0e-4a3d-ac33-e700d3c76753" (UID: "e2096686-0f0e-4a3d-ac33-e700d3c76753"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.303839 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2096686-0f0e-4a3d-ac33-e700d3c76753-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.304148 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2096686-0f0e-4a3d-ac33-e700d3c76753-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.304228 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlxtj\" (UniqueName: \"kubernetes.io/projected/e2096686-0f0e-4a3d-ac33-e700d3c76753-kube-api-access-qlxtj\") on node \"crc\" DevicePath \"\"" Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.632438 4895 generic.go:334] "Generic (PLEG): container finished" podID="e2096686-0f0e-4a3d-ac33-e700d3c76753" 
containerID="2368e37c650625a2f4a9949d1d96377ce3a67bccd698aa7021e62e921f537f70" exitCode=0 Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.632513 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p25ht" event={"ID":"e2096686-0f0e-4a3d-ac33-e700d3c76753","Type":"ContainerDied","Data":"2368e37c650625a2f4a9949d1d96377ce3a67bccd698aa7021e62e921f537f70"} Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.632559 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p25ht" event={"ID":"e2096686-0f0e-4a3d-ac33-e700d3c76753","Type":"ContainerDied","Data":"f66b479c85653229b5be847a51e95db0d6bbf3691146fd49b978905f38a7044a"} Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.632564 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p25ht" Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.632587 4895 scope.go:117] "RemoveContainer" containerID="2368e37c650625a2f4a9949d1d96377ce3a67bccd698aa7021e62e921f537f70" Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.675389 4895 scope.go:117] "RemoveContainer" containerID="685e19e645c3e22da5887ee2243729f745d4e73c71da97023347983589aed208" Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.681171 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p25ht"] Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.696358 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p25ht"] Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.700511 4895 scope.go:117] "RemoveContainer" containerID="01c78224b38421c6bead9cab323084bdada7b612173bbc8b415fa7527be2d63c" Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.762946 4895 scope.go:117] "RemoveContainer" containerID="2368e37c650625a2f4a9949d1d96377ce3a67bccd698aa7021e62e921f537f70" Jan 29 
09:08:27 crc kubenswrapper[4895]: E0129 09:08:27.763730 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2368e37c650625a2f4a9949d1d96377ce3a67bccd698aa7021e62e921f537f70\": container with ID starting with 2368e37c650625a2f4a9949d1d96377ce3a67bccd698aa7021e62e921f537f70 not found: ID does not exist" containerID="2368e37c650625a2f4a9949d1d96377ce3a67bccd698aa7021e62e921f537f70" Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.763781 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2368e37c650625a2f4a9949d1d96377ce3a67bccd698aa7021e62e921f537f70"} err="failed to get container status \"2368e37c650625a2f4a9949d1d96377ce3a67bccd698aa7021e62e921f537f70\": rpc error: code = NotFound desc = could not find container \"2368e37c650625a2f4a9949d1d96377ce3a67bccd698aa7021e62e921f537f70\": container with ID starting with 2368e37c650625a2f4a9949d1d96377ce3a67bccd698aa7021e62e921f537f70 not found: ID does not exist" Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.763814 4895 scope.go:117] "RemoveContainer" containerID="685e19e645c3e22da5887ee2243729f745d4e73c71da97023347983589aed208" Jan 29 09:08:27 crc kubenswrapper[4895]: E0129 09:08:27.764498 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"685e19e645c3e22da5887ee2243729f745d4e73c71da97023347983589aed208\": container with ID starting with 685e19e645c3e22da5887ee2243729f745d4e73c71da97023347983589aed208 not found: ID does not exist" containerID="685e19e645c3e22da5887ee2243729f745d4e73c71da97023347983589aed208" Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.764558 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"685e19e645c3e22da5887ee2243729f745d4e73c71da97023347983589aed208"} err="failed to get container status 
\"685e19e645c3e22da5887ee2243729f745d4e73c71da97023347983589aed208\": rpc error: code = NotFound desc = could not find container \"685e19e645c3e22da5887ee2243729f745d4e73c71da97023347983589aed208\": container with ID starting with 685e19e645c3e22da5887ee2243729f745d4e73c71da97023347983589aed208 not found: ID does not exist" Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.764596 4895 scope.go:117] "RemoveContainer" containerID="01c78224b38421c6bead9cab323084bdada7b612173bbc8b415fa7527be2d63c" Jan 29 09:08:27 crc kubenswrapper[4895]: E0129 09:08:27.765088 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01c78224b38421c6bead9cab323084bdada7b612173bbc8b415fa7527be2d63c\": container with ID starting with 01c78224b38421c6bead9cab323084bdada7b612173bbc8b415fa7527be2d63c not found: ID does not exist" containerID="01c78224b38421c6bead9cab323084bdada7b612173bbc8b415fa7527be2d63c" Jan 29 09:08:27 crc kubenswrapper[4895]: I0129 09:08:27.765164 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01c78224b38421c6bead9cab323084bdada7b612173bbc8b415fa7527be2d63c"} err="failed to get container status \"01c78224b38421c6bead9cab323084bdada7b612173bbc8b415fa7527be2d63c\": rpc error: code = NotFound desc = could not find container \"01c78224b38421c6bead9cab323084bdada7b612173bbc8b415fa7527be2d63c\": container with ID starting with 01c78224b38421c6bead9cab323084bdada7b612173bbc8b415fa7527be2d63c not found: ID does not exist" Jan 29 09:08:28 crc kubenswrapper[4895]: I0129 09:08:28.211298 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:08:28 crc kubenswrapper[4895]: E0129 09:08:28.211802 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:08:29 crc kubenswrapper[4895]: I0129 09:08:29.226436 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2096686-0f0e-4a3d-ac33-e700d3c76753" path="/var/lib/kubelet/pods/e2096686-0f0e-4a3d-ac33-e700d3c76753/volumes" Jan 29 09:08:42 crc kubenswrapper[4895]: I0129 09:08:42.211996 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:08:42 crc kubenswrapper[4895]: E0129 09:08:42.214863 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:08:54 crc kubenswrapper[4895]: I0129 09:08:54.211441 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:08:54 crc kubenswrapper[4895]: E0129 09:08:54.212436 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:09:06 crc kubenswrapper[4895]: I0129 09:09:06.211555 4895 scope.go:117] "RemoveContainer" 
containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:09:06 crc kubenswrapper[4895]: E0129 09:09:06.212749 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:09:18 crc kubenswrapper[4895]: I0129 09:09:18.211979 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:09:18 crc kubenswrapper[4895]: E0129 09:09:18.213044 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:09:32 crc kubenswrapper[4895]: I0129 09:09:32.211658 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:09:32 crc kubenswrapper[4895]: E0129 09:09:32.212625 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:09:45 crc kubenswrapper[4895]: I0129 09:09:45.212662 4895 scope.go:117] 
"RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:09:45 crc kubenswrapper[4895]: E0129 09:09:45.214260 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:09:57 crc kubenswrapper[4895]: I0129 09:09:57.211950 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:09:57 crc kubenswrapper[4895]: E0129 09:09:57.212889 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:10:03 crc kubenswrapper[4895]: I0129 09:10:03.054266 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-b99gm"] Jan 29 09:10:03 crc kubenswrapper[4895]: I0129 09:10:03.066679 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-b99gm"] Jan 29 09:10:03 crc kubenswrapper[4895]: I0129 09:10:03.238571 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="617fe48f-9b10-427c-aab3-1d2619c7bb09" path="/var/lib/kubelet/pods/617fe48f-9b10-427c-aab3-1d2619c7bb09/volumes" Jan 29 09:10:03 crc kubenswrapper[4895]: I0129 09:10:03.507354 4895 scope.go:117] "RemoveContainer" 
containerID="40d7d2a28048f4c5b2109fcf99283eeac6dacb8837828e255ea08022393a1069" Jan 29 09:10:03 crc kubenswrapper[4895]: I0129 09:10:03.545552 4895 scope.go:117] "RemoveContainer" containerID="10fb10f56bb393b8f9d11f0fe884375099897712719cf571e0194e4ff3d78552" Jan 29 09:10:05 crc kubenswrapper[4895]: I0129 09:10:05.046569 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-eb0f-account-create-update-8hlmq"] Jan 29 09:10:05 crc kubenswrapper[4895]: I0129 09:10:05.066123 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5b5f-account-create-update-8gmtx"] Jan 29 09:10:05 crc kubenswrapper[4895]: I0129 09:10:05.081789 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-4r786"] Jan 29 09:10:05 crc kubenswrapper[4895]: I0129 09:10:05.091882 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-eb0f-account-create-update-8hlmq"] Jan 29 09:10:05 crc kubenswrapper[4895]: I0129 09:10:05.104024 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-4r786"] Jan 29 09:10:05 crc kubenswrapper[4895]: I0129 09:10:05.116601 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5b5f-account-create-update-8gmtx"] Jan 29 09:10:05 crc kubenswrapper[4895]: I0129 09:10:05.231979 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42d29420-dc1a-4983-b157-59364db98935" path="/var/lib/kubelet/pods/42d29420-dc1a-4983-b157-59364db98935/volumes" Jan 29 09:10:05 crc kubenswrapper[4895]: I0129 09:10:05.233881 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="508f7a95-0307-4a55-8e27-29e44a6823ef" path="/var/lib/kubelet/pods/508f7a95-0307-4a55-8e27-29e44a6823ef/volumes" Jan 29 09:10:05 crc kubenswrapper[4895]: I0129 09:10:05.234983 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad71f34b-a6b5-4947-9dab-83d1655a8d7a" 
path="/var/lib/kubelet/pods/ad71f34b-a6b5-4947-9dab-83d1655a8d7a/volumes" Jan 29 09:10:06 crc kubenswrapper[4895]: I0129 09:10:06.032708 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f408-account-create-update-9lsgg"] Jan 29 09:10:06 crc kubenswrapper[4895]: I0129 09:10:06.046684 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-fzmwt"] Jan 29 09:10:06 crc kubenswrapper[4895]: I0129 09:10:06.063828 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-fzmwt"] Jan 29 09:10:06 crc kubenswrapper[4895]: I0129 09:10:06.076465 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-f408-account-create-update-9lsgg"] Jan 29 09:10:07 crc kubenswrapper[4895]: I0129 09:10:07.224505 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c111f64-ae01-431e-a625-7b2131a90998" path="/var/lib/kubelet/pods/1c111f64-ae01-431e-a625-7b2131a90998/volumes" Jan 29 09:10:07 crc kubenswrapper[4895]: I0129 09:10:07.225570 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d38c12b-6fa7-411f-91bf-a0d0a6ed733c" path="/var/lib/kubelet/pods/9d38c12b-6fa7-411f-91bf-a0d0a6ed733c/volumes" Jan 29 09:10:09 crc kubenswrapper[4895]: I0129 09:10:09.219482 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:10:09 crc kubenswrapper[4895]: E0129 09:10:09.220024 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:10:23 crc kubenswrapper[4895]: I0129 09:10:23.212334 4895 
scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:10:23 crc kubenswrapper[4895]: E0129 09:10:23.214714 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:10:30 crc kubenswrapper[4895]: I0129 09:10:30.060835 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8cvs2"] Jan 29 09:10:30 crc kubenswrapper[4895]: I0129 09:10:30.071954 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8cvs2"] Jan 29 09:10:31 crc kubenswrapper[4895]: I0129 09:10:31.227043 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe994737-b654-4e7f-bd3f-672e069bdda0" path="/var/lib/kubelet/pods/fe994737-b654-4e7f-bd3f-672e069bdda0/volumes" Jan 29 09:10:34 crc kubenswrapper[4895]: I0129 09:10:34.210890 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:10:34 crc kubenswrapper[4895]: E0129 09:10:34.211792 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:10:45 crc kubenswrapper[4895]: I0129 09:10:45.212694 4895 scope.go:117] "RemoveContainer" 
containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:10:45 crc kubenswrapper[4895]: E0129 09:10:45.213733 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:10:51 crc kubenswrapper[4895]: I0129 09:10:51.057446 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-pddx8"] Jan 29 09:10:51 crc kubenswrapper[4895]: I0129 09:10:51.070043 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-j6gzt"] Jan 29 09:10:51 crc kubenswrapper[4895]: I0129 09:10:51.084659 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-pddx8"] Jan 29 09:10:51 crc kubenswrapper[4895]: I0129 09:10:51.107864 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-j6gzt"] Jan 29 09:10:51 crc kubenswrapper[4895]: I0129 09:10:51.226844 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be94e72-1174-4bf4-8706-387fef234ccb" path="/var/lib/kubelet/pods/0be94e72-1174-4bf4-8706-387fef234ccb/volumes" Jan 29 09:10:51 crc kubenswrapper[4895]: I0129 09:10:51.228149 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18ef183c-59b1-4d07-831f-db71d6f978b8" path="/var/lib/kubelet/pods/18ef183c-59b1-4d07-831f-db71d6f978b8/volumes" Jan 29 09:10:56 crc kubenswrapper[4895]: I0129 09:10:56.048154 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-da19-account-create-update-vdswv"] Jan 29 09:10:56 crc kubenswrapper[4895]: I0129 09:10:56.062943 4895 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/barbican-da19-account-create-update-vdswv"] Jan 29 09:10:56 crc kubenswrapper[4895]: I0129 09:10:56.075416 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d597-account-create-update-ks9x5"] Jan 29 09:10:56 crc kubenswrapper[4895]: I0129 09:10:56.085353 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d6b8-account-create-update-j4zvh"] Jan 29 09:10:56 crc kubenswrapper[4895]: I0129 09:10:56.095388 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-544ht"] Jan 29 09:10:56 crc kubenswrapper[4895]: I0129 09:10:56.106234 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d6b8-account-create-update-j4zvh"] Jan 29 09:10:56 crc kubenswrapper[4895]: I0129 09:10:56.117107 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-544ht"] Jan 29 09:10:56 crc kubenswrapper[4895]: I0129 09:10:56.131149 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d597-account-create-update-ks9x5"] Jan 29 09:10:57 crc kubenswrapper[4895]: I0129 09:10:57.226288 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cac20b0-8086-44e6-8ef4-cf184b849ee3" path="/var/lib/kubelet/pods/2cac20b0-8086-44e6-8ef4-cf184b849ee3/volumes" Jan 29 09:10:57 crc kubenswrapper[4895]: I0129 09:10:57.227393 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd9642f6-4325-49a2-bad8-71b3383cc5ca" path="/var/lib/kubelet/pods/dd9642f6-4325-49a2-bad8-71b3383cc5ca/volumes" Jan 29 09:10:57 crc kubenswrapper[4895]: I0129 09:10:57.228883 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8d0ec49-1480-4452-a955-1ee612064f8a" path="/var/lib/kubelet/pods/f8d0ec49-1480-4452-a955-1ee612064f8a/volumes" Jan 29 09:10:57 crc kubenswrapper[4895]: I0129 09:10:57.229974 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb" path="/var/lib/kubelet/pods/f8e3a8fb-aec9-4c97-9ea6-f4987d0b64eb/volumes" Jan 29 09:10:58 crc kubenswrapper[4895]: I0129 09:10:58.042669 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kchhm"] Jan 29 09:10:58 crc kubenswrapper[4895]: I0129 09:10:58.056895 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kchhm"] Jan 29 09:10:58 crc kubenswrapper[4895]: I0129 09:10:58.212426 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:10:58 crc kubenswrapper[4895]: E0129 09:10:58.212755 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:10:59 crc kubenswrapper[4895]: I0129 09:10:59.228529 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7e86824-b384-45ea-b4bb-946f795bc9c5" path="/var/lib/kubelet/pods/a7e86824-b384-45ea-b4bb-946f795bc9c5/volumes" Jan 29 09:11:03 crc kubenswrapper[4895]: I0129 09:11:03.042003 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-sd8bs"] Jan 29 09:11:03 crc kubenswrapper[4895]: I0129 09:11:03.054474 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-sd8bs"] Jan 29 09:11:03 crc kubenswrapper[4895]: I0129 09:11:03.228761 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc7fffcb-533d-439c-ae0d-7b0bbe9d5480" path="/var/lib/kubelet/pods/bc7fffcb-533d-439c-ae0d-7b0bbe9d5480/volumes" Jan 29 09:11:03 crc kubenswrapper[4895]: I0129 09:11:03.616076 4895 
scope.go:117] "RemoveContainer" containerID="8bc8e26b49a1d1a10666ea7e88a0eaaee0c4ab11f5e437464ccdac53743fa15e" Jan 29 09:11:03 crc kubenswrapper[4895]: I0129 09:11:03.647124 4895 scope.go:117] "RemoveContainer" containerID="677d52ec519ee0e2c063f859d7b06ac85224fc30cb82f12b7b8b32a441406836" Jan 29 09:11:03 crc kubenswrapper[4895]: I0129 09:11:03.677423 4895 scope.go:117] "RemoveContainer" containerID="1d0f8de9b213f39cab42aee331be1f6bd5501ff5305cef3b30a26d261f8a0e63" Jan 29 09:11:03 crc kubenswrapper[4895]: I0129 09:11:03.744563 4895 scope.go:117] "RemoveContainer" containerID="0b2d0147c3e633071187bf910a0446568146f2be77738176d1da4326554272fd" Jan 29 09:11:03 crc kubenswrapper[4895]: I0129 09:11:03.813285 4895 scope.go:117] "RemoveContainer" containerID="70893ed3120f75e62eaa24fc4bcc1664f9fdeb2a7b0c4a20ef73232c1787a9d5" Jan 29 09:11:03 crc kubenswrapper[4895]: I0129 09:11:03.858130 4895 scope.go:117] "RemoveContainer" containerID="36498f6b0caddda96dc7a8cc3b6b8e4c9ee7fb15d1a37d06719dc74b6f19ffc8" Jan 29 09:11:03 crc kubenswrapper[4895]: I0129 09:11:03.929463 4895 scope.go:117] "RemoveContainer" containerID="2c4d79873a0bf753d61b4cb09a0f82ba5b2b3d31f8e533474572322c6bdc1e25" Jan 29 09:11:03 crc kubenswrapper[4895]: I0129 09:11:03.959051 4895 scope.go:117] "RemoveContainer" containerID="5caf7ff5a638d9ebbcc04049d656a4ef8bd381aa7d33d7d5b4a4fcaa8a524084" Jan 29 09:11:03 crc kubenswrapper[4895]: I0129 09:11:03.987653 4895 scope.go:117] "RemoveContainer" containerID="899a094a20448dc2bccb6eb2d248a52ab39e84bbc1ca2a0b16cb6a9cb1bba65f" Jan 29 09:11:04 crc kubenswrapper[4895]: I0129 09:11:04.016433 4895 scope.go:117] "RemoveContainer" containerID="3ea3cee743d6c39820726922743e6c1ca138f7432d76efaed68c43f9765d8b9e" Jan 29 09:11:04 crc kubenswrapper[4895]: I0129 09:11:04.045666 4895 scope.go:117] "RemoveContainer" containerID="5341d76479aba4ced0010f5b26810e3a782acdbbdb0b8c183dc0c629377544c3" Jan 29 09:11:04 crc kubenswrapper[4895]: I0129 09:11:04.068555 4895 scope.go:117] 
"RemoveContainer" containerID="c1f0cb2ad269867bb442306bd648c59a4e7c43f2d5bdc95b54c210db4c2a7dfd" Jan 29 09:11:04 crc kubenswrapper[4895]: I0129 09:11:04.096869 4895 scope.go:117] "RemoveContainer" containerID="3f5b6e9f53a8a3c06bf166d8e38ab7424f333f546481bf66b53683e2d1930742" Jan 29 09:11:04 crc kubenswrapper[4895]: I0129 09:11:04.122561 4895 scope.go:117] "RemoveContainer" containerID="be7ada56dc9f66f2c8ee6c305dd374dd9bf5b9a47e7763b4172ec47cfeec8590" Jan 29 09:11:04 crc kubenswrapper[4895]: I0129 09:11:04.153711 4895 scope.go:117] "RemoveContainer" containerID="077d409bc60227d4a6ca64441f83a2f2df9558d9b64fc524a6b74ab4808e4dc7" Jan 29 09:11:04 crc kubenswrapper[4895]: I0129 09:11:04.181146 4895 scope.go:117] "RemoveContainer" containerID="bdfc2af1ace031accce5628b7e570e5199e5cfcbb8e3b18cb3ff0908c801fd84" Jan 29 09:11:04 crc kubenswrapper[4895]: I0129 09:11:04.212418 4895 scope.go:117] "RemoveContainer" containerID="83eccf0330358d74f34fad459119acc542635abe11e25da3b847b9dfa4ab0517" Jan 29 09:11:04 crc kubenswrapper[4895]: I0129 09:11:04.275749 4895 scope.go:117] "RemoveContainer" containerID="39430963f6521a41c42fbbb54c4a8d7756f3d578096a4b2182026b430ff2e09e" Jan 29 09:11:04 crc kubenswrapper[4895]: I0129 09:11:04.312867 4895 scope.go:117] "RemoveContainer" containerID="6a10d63066b10c1ed66835ba12fc34347a0d1a62a87d1b1a284cc96a885d4cfd" Jan 29 09:11:04 crc kubenswrapper[4895]: I0129 09:11:04.341776 4895 scope.go:117] "RemoveContainer" containerID="e0c43323de8fe4fa4495db4d692a52ee00d1dc8097a6caaacb6ef5f71b274326" Jan 29 09:11:10 crc kubenswrapper[4895]: I0129 09:11:10.061876 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-create-xbr7n"] Jan 29 09:11:10 crc kubenswrapper[4895]: I0129 09:11:10.079299 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-0f17-account-create-update-htqfc"] Jan 29 09:11:10 crc kubenswrapper[4895]: I0129 09:11:10.092106 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ironic-0f17-account-create-update-htqfc"] Jan 29 09:11:10 crc kubenswrapper[4895]: I0129 09:11:10.113645 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-create-xbr7n"] Jan 29 09:11:10 crc kubenswrapper[4895]: I0129 09:11:10.211434 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:11:10 crc kubenswrapper[4895]: E0129 09:11:10.212124 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:11:11 crc kubenswrapper[4895]: I0129 09:11:11.231019 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="237a1a8f-6944-49c1-bd88-805e164ef454" path="/var/lib/kubelet/pods/237a1a8f-6944-49c1-bd88-805e164ef454/volumes" Jan 29 09:11:11 crc kubenswrapper[4895]: I0129 09:11:11.231722 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6622d76d-c899-462e-a3ee-137c25e2a9ad" path="/var/lib/kubelet/pods/6622d76d-c899-462e-a3ee-137c25e2a9ad/volumes" Jan 29 09:11:23 crc kubenswrapper[4895]: I0129 09:11:23.212977 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:11:23 crc kubenswrapper[4895]: E0129 09:11:23.214024 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:11:37 crc kubenswrapper[4895]: I0129 09:11:37.212809 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:11:37 crc kubenswrapper[4895]: E0129 09:11:37.216037 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:11:38 crc kubenswrapper[4895]: I0129 09:11:38.107367 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8dl7n"] Jan 29 09:11:38 crc kubenswrapper[4895]: I0129 09:11:38.138353 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8dl7n"] Jan 29 09:11:38 crc kubenswrapper[4895]: I0129 09:11:38.151041 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-jlpfn"] Jan 29 09:11:38 crc kubenswrapper[4895]: I0129 09:11:38.165673 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-jlpfn"] Jan 29 09:11:39 crc kubenswrapper[4895]: I0129 09:11:39.223776 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19c72a82-d987-4759-9d4f-be17355af27e" path="/var/lib/kubelet/pods/19c72a82-d987-4759-9d4f-be17355af27e/volumes" Jan 29 09:11:39 crc kubenswrapper[4895]: I0129 09:11:39.224445 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa" path="/var/lib/kubelet/pods/1ed61cb3-28c8-4df7-9a87-e4fccf4dfdaa/volumes" Jan 29 09:11:41 crc kubenswrapper[4895]: I0129 09:11:41.059088 4895 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-qd4f9"] Jan 29 09:11:41 crc kubenswrapper[4895]: I0129 09:11:41.070391 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-qd4f9"] Jan 29 09:11:41 crc kubenswrapper[4895]: I0129 09:11:41.225725 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1077fb9f-c6a3-416d-a7c9-011dd8954ab1" path="/var/lib/kubelet/pods/1077fb9f-c6a3-416d-a7c9-011dd8954ab1/volumes" Jan 29 09:11:48 crc kubenswrapper[4895]: I0129 09:11:48.051979 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-95b5h"] Jan 29 09:11:48 crc kubenswrapper[4895]: I0129 09:11:48.062627 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-95b5h"] Jan 29 09:11:49 crc kubenswrapper[4895]: I0129 09:11:49.226949 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75739842-3c97-4e9a-b13b-4fb5929461b8" path="/var/lib/kubelet/pods/75739842-3c97-4e9a-b13b-4fb5929461b8/volumes" Jan 29 09:11:51 crc kubenswrapper[4895]: I0129 09:11:51.212097 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:11:51 crc kubenswrapper[4895]: E0129 09:11:51.214243 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:12:02 crc kubenswrapper[4895]: I0129 09:12:02.042847 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-xqv89"] Jan 29 09:12:02 crc kubenswrapper[4895]: I0129 09:12:02.055866 4895 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/cinder-db-sync-xqv89"] Jan 29 09:12:03 crc kubenswrapper[4895]: I0129 09:12:03.228176 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3fc9317-29e0-4b1c-b598-5d95fc98a1e7" path="/var/lib/kubelet/pods/d3fc9317-29e0-4b1c-b598-5d95fc98a1e7/volumes" Jan 29 09:12:04 crc kubenswrapper[4895]: I0129 09:12:04.857390 4895 scope.go:117] "RemoveContainer" containerID="54af0152ce7cfe098d9f2b30dd007a8f430e16751b834941ccc4c44c323a5eaa" Jan 29 09:12:04 crc kubenswrapper[4895]: I0129 09:12:04.911733 4895 scope.go:117] "RemoveContainer" containerID="19b8c230279ace9dfabd0515cf5fd30bc53be821c268523cabaef2c5c8f92a38" Jan 29 09:12:05 crc kubenswrapper[4895]: I0129 09:12:05.006836 4895 scope.go:117] "RemoveContainer" containerID="7988cad9ada011a7a3ff18077aa4aff4c5b53515df6c7184d51131d34c3eb5eb" Jan 29 09:12:05 crc kubenswrapper[4895]: I0129 09:12:05.062481 4895 scope.go:117] "RemoveContainer" containerID="ca364c4f2a5a434ec260354594ac049a4e6ec2a459bef0852d6fe25b082cda24" Jan 29 09:12:05 crc kubenswrapper[4895]: I0129 09:12:05.102671 4895 scope.go:117] "RemoveContainer" containerID="4037c578807da950a84229413bdc56f16362aef531973ed7facc667f91bf8152" Jan 29 09:12:05 crc kubenswrapper[4895]: I0129 09:12:05.183135 4895 scope.go:117] "RemoveContainer" containerID="be5eeb198159a2c69759021e019939a0140938c65c6186369aa27ac55d914e6a" Jan 29 09:12:05 crc kubenswrapper[4895]: I0129 09:12:05.219381 4895 scope.go:117] "RemoveContainer" containerID="120e4051586881d34b1aeb09a36b8d00c02da0d4e2fc0f47da31bc517f1b0cc8" Jan 29 09:12:06 crc kubenswrapper[4895]: I0129 09:12:06.212686 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:12:06 crc kubenswrapper[4895]: E0129 09:12:06.213232 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:12:08 crc kubenswrapper[4895]: I0129 09:12:08.035758 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-3fd2-account-create-update-86dsg"] Jan 29 09:12:08 crc kubenswrapper[4895]: I0129 09:12:08.049635 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-3fd2-account-create-update-86dsg"] Jan 29 09:12:08 crc kubenswrapper[4895]: I0129 09:12:08.060619 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-create-645lz"] Jan 29 09:12:08 crc kubenswrapper[4895]: I0129 09:12:08.069172 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-create-645lz"] Jan 29 09:12:09 crc kubenswrapper[4895]: I0129 09:12:09.223234 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0854dd31-b444-4b7a-b397-7d623977d1f5" path="/var/lib/kubelet/pods/0854dd31-b444-4b7a-b397-7d623977d1f5/volumes" Jan 29 09:12:09 crc kubenswrapper[4895]: I0129 09:12:09.224636 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e059385f-a716-4bd6-96ff-825e1fac5216" path="/var/lib/kubelet/pods/e059385f-a716-4bd6-96ff-825e1fac5216/volumes" Jan 29 09:12:18 crc kubenswrapper[4895]: I0129 09:12:18.211251 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:12:18 crc kubenswrapper[4895]: E0129 09:12:18.212314 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.321522 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-49jmc"] Jan 29 09:12:27 crc kubenswrapper[4895]: E0129 09:12:27.322784 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2096686-0f0e-4a3d-ac33-e700d3c76753" containerName="registry-server" Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.322805 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2096686-0f0e-4a3d-ac33-e700d3c76753" containerName="registry-server" Jan 29 09:12:27 crc kubenswrapper[4895]: E0129 09:12:27.322820 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2096686-0f0e-4a3d-ac33-e700d3c76753" containerName="extract-utilities" Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.322827 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2096686-0f0e-4a3d-ac33-e700d3c76753" containerName="extract-utilities" Jan 29 09:12:27 crc kubenswrapper[4895]: E0129 09:12:27.322845 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2096686-0f0e-4a3d-ac33-e700d3c76753" containerName="extract-content" Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.322854 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2096686-0f0e-4a3d-ac33-e700d3c76753" containerName="extract-content" Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.323101 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2096686-0f0e-4a3d-ac33-e700d3c76753" containerName="registry-server" Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.324740 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.335964 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-49jmc"] Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.365573 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45557ff-dec5-42ea-b306-4847b705e068-catalog-content\") pod \"certified-operators-49jmc\" (UID: \"b45557ff-dec5-42ea-b306-4847b705e068\") " pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.365706 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45557ff-dec5-42ea-b306-4847b705e068-utilities\") pod \"certified-operators-49jmc\" (UID: \"b45557ff-dec5-42ea-b306-4847b705e068\") " pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.365837 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smdkf\" (UniqueName: \"kubernetes.io/projected/b45557ff-dec5-42ea-b306-4847b705e068-kube-api-access-smdkf\") pod \"certified-operators-49jmc\" (UID: \"b45557ff-dec5-42ea-b306-4847b705e068\") " pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.467445 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45557ff-dec5-42ea-b306-4847b705e068-catalog-content\") pod \"certified-operators-49jmc\" (UID: \"b45557ff-dec5-42ea-b306-4847b705e068\") " pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.467542 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45557ff-dec5-42ea-b306-4847b705e068-utilities\") pod \"certified-operators-49jmc\" (UID: \"b45557ff-dec5-42ea-b306-4847b705e068\") " pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.467642 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smdkf\" (UniqueName: \"kubernetes.io/projected/b45557ff-dec5-42ea-b306-4847b705e068-kube-api-access-smdkf\") pod \"certified-operators-49jmc\" (UID: \"b45557ff-dec5-42ea-b306-4847b705e068\") " pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.468264 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45557ff-dec5-42ea-b306-4847b705e068-catalog-content\") pod \"certified-operators-49jmc\" (UID: \"b45557ff-dec5-42ea-b306-4847b705e068\") " pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.468315 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45557ff-dec5-42ea-b306-4847b705e068-utilities\") pod \"certified-operators-49jmc\" (UID: \"b45557ff-dec5-42ea-b306-4847b705e068\") " pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.491128 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smdkf\" (UniqueName: \"kubernetes.io/projected/b45557ff-dec5-42ea-b306-4847b705e068-kube-api-access-smdkf\") pod \"certified-operators-49jmc\" (UID: \"b45557ff-dec5-42ea-b306-4847b705e068\") " pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:27 crc kubenswrapper[4895]: I0129 09:12:27.653731 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:28 crc kubenswrapper[4895]: I0129 09:12:28.186371 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-49jmc"] Jan 29 09:12:28 crc kubenswrapper[4895]: I0129 09:12:28.741465 4895 generic.go:334] "Generic (PLEG): container finished" podID="b45557ff-dec5-42ea-b306-4847b705e068" containerID="279a3e7147d0f0b74d02c3b0b61a3271c630ba9424557345581d3cbdc4b7705f" exitCode=0 Jan 29 09:12:28 crc kubenswrapper[4895]: I0129 09:12:28.741519 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49jmc" event={"ID":"b45557ff-dec5-42ea-b306-4847b705e068","Type":"ContainerDied","Data":"279a3e7147d0f0b74d02c3b0b61a3271c630ba9424557345581d3cbdc4b7705f"} Jan 29 09:12:28 crc kubenswrapper[4895]: I0129 09:12:28.741552 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49jmc" event={"ID":"b45557ff-dec5-42ea-b306-4847b705e068","Type":"ContainerStarted","Data":"6b08f5516fa94c5a92de0fac9e7706c9c8f3cd2863d063aaf89ecb2910b46c37"} Jan 29 09:12:29 crc kubenswrapper[4895]: I0129 09:12:29.758317 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49jmc" event={"ID":"b45557ff-dec5-42ea-b306-4847b705e068","Type":"ContainerStarted","Data":"22874382ea3e00ffc76a86feffc8c858445bed2a1212ba5e7ffb4317e3e1edad"} Jan 29 09:12:30 crc kubenswrapper[4895]: I0129 09:12:30.296712 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:12:30 crc kubenswrapper[4895]: E0129 09:12:30.297248 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:12:30 crc kubenswrapper[4895]: I0129 09:12:30.343023 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nkt8p"] Jan 29 09:12:30 crc kubenswrapper[4895]: I0129 09:12:30.345993 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:12:30 crc kubenswrapper[4895]: I0129 09:12:30.366764 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nkt8p"] Jan 29 09:12:30 crc kubenswrapper[4895]: E0129 09:12:30.413840 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb45557ff_dec5_42ea_b306_4847b705e068.slice/crio-22874382ea3e00ffc76a86feffc8c858445bed2a1212ba5e7ffb4317e3e1edad.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb45557ff_dec5_42ea_b306_4847b705e068.slice/crio-conmon-22874382ea3e00ffc76a86feffc8c858445bed2a1212ba5e7ffb4317e3e1edad.scope\": RecentStats: unable to find data in memory cache]" Jan 29 09:12:30 crc kubenswrapper[4895]: I0129 09:12:30.507261 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4606f8d5-072f-444a-b7ec-07a696b03640-catalog-content\") pod \"redhat-operators-nkt8p\" (UID: \"4606f8d5-072f-444a-b7ec-07a696b03640\") " pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:12:30 crc kubenswrapper[4895]: I0129 09:12:30.507361 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mpj8m\" (UniqueName: \"kubernetes.io/projected/4606f8d5-072f-444a-b7ec-07a696b03640-kube-api-access-mpj8m\") pod \"redhat-operators-nkt8p\" (UID: \"4606f8d5-072f-444a-b7ec-07a696b03640\") " pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:12:30 crc kubenswrapper[4895]: I0129 09:12:30.507426 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4606f8d5-072f-444a-b7ec-07a696b03640-utilities\") pod \"redhat-operators-nkt8p\" (UID: \"4606f8d5-072f-444a-b7ec-07a696b03640\") " pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:12:30 crc kubenswrapper[4895]: I0129 09:12:30.609611 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4606f8d5-072f-444a-b7ec-07a696b03640-catalog-content\") pod \"redhat-operators-nkt8p\" (UID: \"4606f8d5-072f-444a-b7ec-07a696b03640\") " pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:12:30 crc kubenswrapper[4895]: I0129 09:12:30.609713 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpj8m\" (UniqueName: \"kubernetes.io/projected/4606f8d5-072f-444a-b7ec-07a696b03640-kube-api-access-mpj8m\") pod \"redhat-operators-nkt8p\" (UID: \"4606f8d5-072f-444a-b7ec-07a696b03640\") " pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:12:30 crc kubenswrapper[4895]: I0129 09:12:30.609764 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4606f8d5-072f-444a-b7ec-07a696b03640-utilities\") pod \"redhat-operators-nkt8p\" (UID: \"4606f8d5-072f-444a-b7ec-07a696b03640\") " pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:12:30 crc kubenswrapper[4895]: I0129 09:12:30.610245 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4606f8d5-072f-444a-b7ec-07a696b03640-catalog-content\") pod \"redhat-operators-nkt8p\" (UID: \"4606f8d5-072f-444a-b7ec-07a696b03640\") " pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:12:30 crc kubenswrapper[4895]: I0129 09:12:30.610471 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4606f8d5-072f-444a-b7ec-07a696b03640-utilities\") pod \"redhat-operators-nkt8p\" (UID: \"4606f8d5-072f-444a-b7ec-07a696b03640\") " pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:12:30 crc kubenswrapper[4895]: I0129 09:12:30.639831 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpj8m\" (UniqueName: \"kubernetes.io/projected/4606f8d5-072f-444a-b7ec-07a696b03640-kube-api-access-mpj8m\") pod \"redhat-operators-nkt8p\" (UID: \"4606f8d5-072f-444a-b7ec-07a696b03640\") " pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:12:30 crc kubenswrapper[4895]: I0129 09:12:30.709782 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:12:30 crc kubenswrapper[4895]: I0129 09:12:30.774524 4895 generic.go:334] "Generic (PLEG): container finished" podID="b45557ff-dec5-42ea-b306-4847b705e068" containerID="22874382ea3e00ffc76a86feffc8c858445bed2a1212ba5e7ffb4317e3e1edad" exitCode=0 Jan 29 09:12:30 crc kubenswrapper[4895]: I0129 09:12:30.774588 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49jmc" event={"ID":"b45557ff-dec5-42ea-b306-4847b705e068","Type":"ContainerDied","Data":"22874382ea3e00ffc76a86feffc8c858445bed2a1212ba5e7ffb4317e3e1edad"} Jan 29 09:12:31 crc kubenswrapper[4895]: I0129 09:12:31.302174 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nkt8p"] Jan 29 09:12:31 crc kubenswrapper[4895]: I0129 09:12:31.788477 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49jmc" event={"ID":"b45557ff-dec5-42ea-b306-4847b705e068","Type":"ContainerStarted","Data":"90e04586d5b7ae252459cedda469d4699e06a1c19a43f105292b2c611da459f4"} Jan 29 09:12:31 crc kubenswrapper[4895]: I0129 09:12:31.790477 4895 generic.go:334] "Generic (PLEG): container finished" podID="4606f8d5-072f-444a-b7ec-07a696b03640" containerID="5b0cade90728bb5f051d060272f58ed144ac968f1a1c84c3d6e98478ecfa8b9a" exitCode=0 Jan 29 09:12:31 crc kubenswrapper[4895]: I0129 09:12:31.790537 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkt8p" event={"ID":"4606f8d5-072f-444a-b7ec-07a696b03640","Type":"ContainerDied","Data":"5b0cade90728bb5f051d060272f58ed144ac968f1a1c84c3d6e98478ecfa8b9a"} Jan 29 09:12:31 crc kubenswrapper[4895]: I0129 09:12:31.790577 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkt8p" 
event={"ID":"4606f8d5-072f-444a-b7ec-07a696b03640","Type":"ContainerStarted","Data":"426b404fb08a1200c211df670bc692e3555cddda04d7d5d00d9425c664510502"} Jan 29 09:12:31 crc kubenswrapper[4895]: I0129 09:12:31.831122 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-49jmc" podStartSLOduration=2.245989462 podStartE2EDuration="4.831089129s" podCreationTimestamp="2026-01-29 09:12:27 +0000 UTC" firstStartedPulling="2026-01-29 09:12:28.744366518 +0000 UTC m=+1890.385874664" lastFinishedPulling="2026-01-29 09:12:31.329466175 +0000 UTC m=+1892.970974331" observedRunningTime="2026-01-29 09:12:31.822582512 +0000 UTC m=+1893.464090658" watchObservedRunningTime="2026-01-29 09:12:31.831089129 +0000 UTC m=+1893.472597285" Jan 29 09:12:33 crc kubenswrapper[4895]: I0129 09:12:33.817459 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkt8p" event={"ID":"4606f8d5-072f-444a-b7ec-07a696b03640","Type":"ContainerStarted","Data":"84fd34b08764df65f924dbe324e6f92fb14a5d0fcbef968fcef24b37e5ede848"} Jan 29 09:12:34 crc kubenswrapper[4895]: I0129 09:12:34.845573 4895 generic.go:334] "Generic (PLEG): container finished" podID="4606f8d5-072f-444a-b7ec-07a696b03640" containerID="84fd34b08764df65f924dbe324e6f92fb14a5d0fcbef968fcef24b37e5ede848" exitCode=0 Jan 29 09:12:34 crc kubenswrapper[4895]: I0129 09:12:34.845630 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkt8p" event={"ID":"4606f8d5-072f-444a-b7ec-07a696b03640","Type":"ContainerDied","Data":"84fd34b08764df65f924dbe324e6f92fb14a5d0fcbef968fcef24b37e5ede848"} Jan 29 09:12:35 crc kubenswrapper[4895]: I0129 09:12:35.860849 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkt8p" 
event={"ID":"4606f8d5-072f-444a-b7ec-07a696b03640","Type":"ContainerStarted","Data":"64a4b160341fb0acfd12e1e007b7e70202721f4c95896d9160deb9318a932ff2"} Jan 29 09:12:35 crc kubenswrapper[4895]: I0129 09:12:35.881747 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nkt8p" podStartSLOduration=2.4006142759999998 podStartE2EDuration="5.881725056s" podCreationTimestamp="2026-01-29 09:12:30 +0000 UTC" firstStartedPulling="2026-01-29 09:12:31.793172326 +0000 UTC m=+1893.434680472" lastFinishedPulling="2026-01-29 09:12:35.274283106 +0000 UTC m=+1896.915791252" observedRunningTime="2026-01-29 09:12:35.879763683 +0000 UTC m=+1897.521271829" watchObservedRunningTime="2026-01-29 09:12:35.881725056 +0000 UTC m=+1897.523233202" Jan 29 09:12:37 crc kubenswrapper[4895]: I0129 09:12:37.654701 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:37 crc kubenswrapper[4895]: I0129 09:12:37.655188 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:37 crc kubenswrapper[4895]: I0129 09:12:37.718211 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:37 crc kubenswrapper[4895]: I0129 09:12:37.937026 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:38 crc kubenswrapper[4895]: I0129 09:12:38.501200 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-49jmc"] Jan 29 09:12:39 crc kubenswrapper[4895]: I0129 09:12:39.926434 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-49jmc" podUID="b45557ff-dec5-42ea-b306-4847b705e068" containerName="registry-server" 
containerID="cri-o://90e04586d5b7ae252459cedda469d4699e06a1c19a43f105292b2c611da459f4" gracePeriod=2 Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.499376 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.616908 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45557ff-dec5-42ea-b306-4847b705e068-utilities\") pod \"b45557ff-dec5-42ea-b306-4847b705e068\" (UID: \"b45557ff-dec5-42ea-b306-4847b705e068\") " Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.617019 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45557ff-dec5-42ea-b306-4847b705e068-catalog-content\") pod \"b45557ff-dec5-42ea-b306-4847b705e068\" (UID: \"b45557ff-dec5-42ea-b306-4847b705e068\") " Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.617172 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smdkf\" (UniqueName: \"kubernetes.io/projected/b45557ff-dec5-42ea-b306-4847b705e068-kube-api-access-smdkf\") pod \"b45557ff-dec5-42ea-b306-4847b705e068\" (UID: \"b45557ff-dec5-42ea-b306-4847b705e068\") " Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.617822 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b45557ff-dec5-42ea-b306-4847b705e068-utilities" (OuterVolumeSpecName: "utilities") pod "b45557ff-dec5-42ea-b306-4847b705e068" (UID: "b45557ff-dec5-42ea-b306-4847b705e068"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.618531 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45557ff-dec5-42ea-b306-4847b705e068-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.627100 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b45557ff-dec5-42ea-b306-4847b705e068-kube-api-access-smdkf" (OuterVolumeSpecName: "kube-api-access-smdkf") pod "b45557ff-dec5-42ea-b306-4847b705e068" (UID: "b45557ff-dec5-42ea-b306-4847b705e068"). InnerVolumeSpecName "kube-api-access-smdkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.667632 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b45557ff-dec5-42ea-b306-4847b705e068-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b45557ff-dec5-42ea-b306-4847b705e068" (UID: "b45557ff-dec5-42ea-b306-4847b705e068"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.712605 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.714070 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.720306 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45557ff-dec5-42ea-b306-4847b705e068-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.720379 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smdkf\" (UniqueName: \"kubernetes.io/projected/b45557ff-dec5-42ea-b306-4847b705e068-kube-api-access-smdkf\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.950381 4895 generic.go:334] "Generic (PLEG): container finished" podID="b45557ff-dec5-42ea-b306-4847b705e068" containerID="90e04586d5b7ae252459cedda469d4699e06a1c19a43f105292b2c611da459f4" exitCode=0 Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.950509 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-49jmc" Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.950504 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49jmc" event={"ID":"b45557ff-dec5-42ea-b306-4847b705e068","Type":"ContainerDied","Data":"90e04586d5b7ae252459cedda469d4699e06a1c19a43f105292b2c611da459f4"} Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.950966 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49jmc" event={"ID":"b45557ff-dec5-42ea-b306-4847b705e068","Type":"ContainerDied","Data":"6b08f5516fa94c5a92de0fac9e7706c9c8f3cd2863d063aaf89ecb2910b46c37"} Jan 29 09:12:40 crc kubenswrapper[4895]: I0129 09:12:40.950994 4895 scope.go:117] "RemoveContainer" containerID="90e04586d5b7ae252459cedda469d4699e06a1c19a43f105292b2c611da459f4" Jan 29 09:12:41 crc kubenswrapper[4895]: I0129 09:12:41.257597 4895 scope.go:117] "RemoveContainer" containerID="22874382ea3e00ffc76a86feffc8c858445bed2a1212ba5e7ffb4317e3e1edad" Jan 29 09:12:41 crc kubenswrapper[4895]: I0129 09:12:41.308139 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-49jmc"] Jan 29 09:12:41 crc kubenswrapper[4895]: I0129 09:12:41.308873 4895 scope.go:117] "RemoveContainer" containerID="279a3e7147d0f0b74d02c3b0b61a3271c630ba9424557345581d3cbdc4b7705f" Jan 29 09:12:41 crc kubenswrapper[4895]: I0129 09:12:41.332642 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-49jmc"] Jan 29 09:12:41 crc kubenswrapper[4895]: I0129 09:12:41.375669 4895 scope.go:117] "RemoveContainer" containerID="90e04586d5b7ae252459cedda469d4699e06a1c19a43f105292b2c611da459f4" Jan 29 09:12:41 crc kubenswrapper[4895]: E0129 09:12:41.378015 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"90e04586d5b7ae252459cedda469d4699e06a1c19a43f105292b2c611da459f4\": container with ID starting with 90e04586d5b7ae252459cedda469d4699e06a1c19a43f105292b2c611da459f4 not found: ID does not exist" containerID="90e04586d5b7ae252459cedda469d4699e06a1c19a43f105292b2c611da459f4" Jan 29 09:12:41 crc kubenswrapper[4895]: I0129 09:12:41.378064 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e04586d5b7ae252459cedda469d4699e06a1c19a43f105292b2c611da459f4"} err="failed to get container status \"90e04586d5b7ae252459cedda469d4699e06a1c19a43f105292b2c611da459f4\": rpc error: code = NotFound desc = could not find container \"90e04586d5b7ae252459cedda469d4699e06a1c19a43f105292b2c611da459f4\": container with ID starting with 90e04586d5b7ae252459cedda469d4699e06a1c19a43f105292b2c611da459f4 not found: ID does not exist" Jan 29 09:12:41 crc kubenswrapper[4895]: I0129 09:12:41.378098 4895 scope.go:117] "RemoveContainer" containerID="22874382ea3e00ffc76a86feffc8c858445bed2a1212ba5e7ffb4317e3e1edad" Jan 29 09:12:41 crc kubenswrapper[4895]: E0129 09:12:41.378395 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22874382ea3e00ffc76a86feffc8c858445bed2a1212ba5e7ffb4317e3e1edad\": container with ID starting with 22874382ea3e00ffc76a86feffc8c858445bed2a1212ba5e7ffb4317e3e1edad not found: ID does not exist" containerID="22874382ea3e00ffc76a86feffc8c858445bed2a1212ba5e7ffb4317e3e1edad" Jan 29 09:12:41 crc kubenswrapper[4895]: I0129 09:12:41.378425 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22874382ea3e00ffc76a86feffc8c858445bed2a1212ba5e7ffb4317e3e1edad"} err="failed to get container status \"22874382ea3e00ffc76a86feffc8c858445bed2a1212ba5e7ffb4317e3e1edad\": rpc error: code = NotFound desc = could not find container \"22874382ea3e00ffc76a86feffc8c858445bed2a1212ba5e7ffb4317e3e1edad\": container with ID 
starting with 22874382ea3e00ffc76a86feffc8c858445bed2a1212ba5e7ffb4317e3e1edad not found: ID does not exist" Jan 29 09:12:41 crc kubenswrapper[4895]: I0129 09:12:41.378507 4895 scope.go:117] "RemoveContainer" containerID="279a3e7147d0f0b74d02c3b0b61a3271c630ba9424557345581d3cbdc4b7705f" Jan 29 09:12:41 crc kubenswrapper[4895]: E0129 09:12:41.378823 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"279a3e7147d0f0b74d02c3b0b61a3271c630ba9424557345581d3cbdc4b7705f\": container with ID starting with 279a3e7147d0f0b74d02c3b0b61a3271c630ba9424557345581d3cbdc4b7705f not found: ID does not exist" containerID="279a3e7147d0f0b74d02c3b0b61a3271c630ba9424557345581d3cbdc4b7705f" Jan 29 09:12:41 crc kubenswrapper[4895]: I0129 09:12:41.378857 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279a3e7147d0f0b74d02c3b0b61a3271c630ba9424557345581d3cbdc4b7705f"} err="failed to get container status \"279a3e7147d0f0b74d02c3b0b61a3271c630ba9424557345581d3cbdc4b7705f\": rpc error: code = NotFound desc = could not find container \"279a3e7147d0f0b74d02c3b0b61a3271c630ba9424557345581d3cbdc4b7705f\": container with ID starting with 279a3e7147d0f0b74d02c3b0b61a3271c630ba9424557345581d3cbdc4b7705f not found: ID does not exist" Jan 29 09:12:41 crc kubenswrapper[4895]: I0129 09:12:41.774393 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nkt8p" podUID="4606f8d5-072f-444a-b7ec-07a696b03640" containerName="registry-server" probeResult="failure" output=< Jan 29 09:12:41 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Jan 29 09:12:41 crc kubenswrapper[4895]: > Jan 29 09:12:43 crc kubenswrapper[4895]: I0129 09:12:43.335799 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b45557ff-dec5-42ea-b306-4847b705e068" 
path="/var/lib/kubelet/pods/b45557ff-dec5-42ea-b306-4847b705e068/volumes" Jan 29 09:12:45 crc kubenswrapper[4895]: I0129 09:12:45.211883 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:12:45 crc kubenswrapper[4895]: E0129 09:12:45.213334 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:12:51 crc kubenswrapper[4895]: I0129 09:12:51.770208 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nkt8p" podUID="4606f8d5-072f-444a-b7ec-07a696b03640" containerName="registry-server" probeResult="failure" output=< Jan 29 09:12:51 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Jan 29 09:12:51 crc kubenswrapper[4895]: > Jan 29 09:12:58 crc kubenswrapper[4895]: I0129 09:12:58.212351 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:12:58 crc kubenswrapper[4895]: I0129 09:12:58.545135 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerStarted","Data":"4ebd8df48bf0fb45ec83588a7131b73c01eddb922c6408acfd935865f5db90c6"} Jan 29 09:13:00 crc kubenswrapper[4895]: I0129 09:13:00.774369 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:13:00 crc kubenswrapper[4895]: I0129 09:13:00.845448 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:13:01 crc kubenswrapper[4895]: I0129 09:13:01.730142 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nkt8p"] Jan 29 09:13:02 crc kubenswrapper[4895]: I0129 09:13:02.586057 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nkt8p" podUID="4606f8d5-072f-444a-b7ec-07a696b03640" containerName="registry-server" containerID="cri-o://64a4b160341fb0acfd12e1e007b7e70202721f4c95896d9160deb9318a932ff2" gracePeriod=2 Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.119240 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.324349 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4606f8d5-072f-444a-b7ec-07a696b03640-utilities\") pod \"4606f8d5-072f-444a-b7ec-07a696b03640\" (UID: \"4606f8d5-072f-444a-b7ec-07a696b03640\") " Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.324585 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpj8m\" (UniqueName: \"kubernetes.io/projected/4606f8d5-072f-444a-b7ec-07a696b03640-kube-api-access-mpj8m\") pod \"4606f8d5-072f-444a-b7ec-07a696b03640\" (UID: \"4606f8d5-072f-444a-b7ec-07a696b03640\") " Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.324748 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4606f8d5-072f-444a-b7ec-07a696b03640-catalog-content\") pod \"4606f8d5-072f-444a-b7ec-07a696b03640\" (UID: \"4606f8d5-072f-444a-b7ec-07a696b03640\") " Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.336544 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/4606f8d5-072f-444a-b7ec-07a696b03640-kube-api-access-mpj8m" (OuterVolumeSpecName: "kube-api-access-mpj8m") pod "4606f8d5-072f-444a-b7ec-07a696b03640" (UID: "4606f8d5-072f-444a-b7ec-07a696b03640"). InnerVolumeSpecName "kube-api-access-mpj8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.338338 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4606f8d5-072f-444a-b7ec-07a696b03640-utilities" (OuterVolumeSpecName: "utilities") pod "4606f8d5-072f-444a-b7ec-07a696b03640" (UID: "4606f8d5-072f-444a-b7ec-07a696b03640"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.442954 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4606f8d5-072f-444a-b7ec-07a696b03640-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.443017 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpj8m\" (UniqueName: \"kubernetes.io/projected/4606f8d5-072f-444a-b7ec-07a696b03640-kube-api-access-mpj8m\") on node \"crc\" DevicePath \"\"" Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.569070 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4606f8d5-072f-444a-b7ec-07a696b03640-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4606f8d5-072f-444a-b7ec-07a696b03640" (UID: "4606f8d5-072f-444a-b7ec-07a696b03640"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.602703 4895 generic.go:334] "Generic (PLEG): container finished" podID="4606f8d5-072f-444a-b7ec-07a696b03640" containerID="64a4b160341fb0acfd12e1e007b7e70202721f4c95896d9160deb9318a932ff2" exitCode=0 Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.602778 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkt8p" event={"ID":"4606f8d5-072f-444a-b7ec-07a696b03640","Type":"ContainerDied","Data":"64a4b160341fb0acfd12e1e007b7e70202721f4c95896d9160deb9318a932ff2"} Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.603208 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkt8p" event={"ID":"4606f8d5-072f-444a-b7ec-07a696b03640","Type":"ContainerDied","Data":"426b404fb08a1200c211df670bc692e3555cddda04d7d5d00d9425c664510502"} Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.603242 4895 scope.go:117] "RemoveContainer" containerID="64a4b160341fb0acfd12e1e007b7e70202721f4c95896d9160deb9318a932ff2" Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.602814 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nkt8p" Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.646362 4895 scope.go:117] "RemoveContainer" containerID="84fd34b08764df65f924dbe324e6f92fb14a5d0fcbef968fcef24b37e5ede848" Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.649930 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4606f8d5-072f-444a-b7ec-07a696b03640-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.658705 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nkt8p"] Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.674331 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nkt8p"] Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.692906 4895 scope.go:117] "RemoveContainer" containerID="5b0cade90728bb5f051d060272f58ed144ac968f1a1c84c3d6e98478ecfa8b9a" Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.763087 4895 scope.go:117] "RemoveContainer" containerID="64a4b160341fb0acfd12e1e007b7e70202721f4c95896d9160deb9318a932ff2" Jan 29 09:13:03 crc kubenswrapper[4895]: E0129 09:13:03.763761 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64a4b160341fb0acfd12e1e007b7e70202721f4c95896d9160deb9318a932ff2\": container with ID starting with 64a4b160341fb0acfd12e1e007b7e70202721f4c95896d9160deb9318a932ff2 not found: ID does not exist" containerID="64a4b160341fb0acfd12e1e007b7e70202721f4c95896d9160deb9318a932ff2" Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.763811 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a4b160341fb0acfd12e1e007b7e70202721f4c95896d9160deb9318a932ff2"} err="failed to get container status 
\"64a4b160341fb0acfd12e1e007b7e70202721f4c95896d9160deb9318a932ff2\": rpc error: code = NotFound desc = could not find container \"64a4b160341fb0acfd12e1e007b7e70202721f4c95896d9160deb9318a932ff2\": container with ID starting with 64a4b160341fb0acfd12e1e007b7e70202721f4c95896d9160deb9318a932ff2 not found: ID does not exist" Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.763846 4895 scope.go:117] "RemoveContainer" containerID="84fd34b08764df65f924dbe324e6f92fb14a5d0fcbef968fcef24b37e5ede848" Jan 29 09:13:03 crc kubenswrapper[4895]: E0129 09:13:03.764406 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84fd34b08764df65f924dbe324e6f92fb14a5d0fcbef968fcef24b37e5ede848\": container with ID starting with 84fd34b08764df65f924dbe324e6f92fb14a5d0fcbef968fcef24b37e5ede848 not found: ID does not exist" containerID="84fd34b08764df65f924dbe324e6f92fb14a5d0fcbef968fcef24b37e5ede848" Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.764478 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84fd34b08764df65f924dbe324e6f92fb14a5d0fcbef968fcef24b37e5ede848"} err="failed to get container status \"84fd34b08764df65f924dbe324e6f92fb14a5d0fcbef968fcef24b37e5ede848\": rpc error: code = NotFound desc = could not find container \"84fd34b08764df65f924dbe324e6f92fb14a5d0fcbef968fcef24b37e5ede848\": container with ID starting with 84fd34b08764df65f924dbe324e6f92fb14a5d0fcbef968fcef24b37e5ede848 not found: ID does not exist" Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.764567 4895 scope.go:117] "RemoveContainer" containerID="5b0cade90728bb5f051d060272f58ed144ac968f1a1c84c3d6e98478ecfa8b9a" Jan 29 09:13:03 crc kubenswrapper[4895]: E0129 09:13:03.766519 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5b0cade90728bb5f051d060272f58ed144ac968f1a1c84c3d6e98478ecfa8b9a\": container with ID starting with 5b0cade90728bb5f051d060272f58ed144ac968f1a1c84c3d6e98478ecfa8b9a not found: ID does not exist" containerID="5b0cade90728bb5f051d060272f58ed144ac968f1a1c84c3d6e98478ecfa8b9a" Jan 29 09:13:03 crc kubenswrapper[4895]: I0129 09:13:03.766577 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b0cade90728bb5f051d060272f58ed144ac968f1a1c84c3d6e98478ecfa8b9a"} err="failed to get container status \"5b0cade90728bb5f051d060272f58ed144ac968f1a1c84c3d6e98478ecfa8b9a\": rpc error: code = NotFound desc = could not find container \"5b0cade90728bb5f051d060272f58ed144ac968f1a1c84c3d6e98478ecfa8b9a\": container with ID starting with 5b0cade90728bb5f051d060272f58ed144ac968f1a1c84c3d6e98478ecfa8b9a not found: ID does not exist" Jan 29 09:13:05 crc kubenswrapper[4895]: I0129 09:13:05.054511 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-432f-account-create-update-rqswk"] Jan 29 09:13:05 crc kubenswrapper[4895]: I0129 09:13:05.064452 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-432f-account-create-update-rqswk"] Jan 29 09:13:05 crc kubenswrapper[4895]: I0129 09:13:05.235658 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4260798e-471a-4a37-8a59-e4c5842d7ea5" path="/var/lib/kubelet/pods/4260798e-471a-4a37-8a59-e4c5842d7ea5/volumes" Jan 29 09:13:05 crc kubenswrapper[4895]: I0129 09:13:05.237411 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4606f8d5-072f-444a-b7ec-07a696b03640" path="/var/lib/kubelet/pods/4606f8d5-072f-444a-b7ec-07a696b03640/volumes" Jan 29 09:13:05 crc kubenswrapper[4895]: I0129 09:13:05.462238 4895 scope.go:117] "RemoveContainer" containerID="0e431e4940db82a907ad982e40e05fe07f5c7ee2915ce96cc7161c1edfa54abe" Jan 29 09:13:05 crc kubenswrapper[4895]: I0129 09:13:05.678821 4895 scope.go:117] 
"RemoveContainer" containerID="b360403ff3246432ef6cc95d44263e39be8969781516702aea433430bd3069a5" Jan 29 09:13:05 crc kubenswrapper[4895]: I0129 09:13:05.718529 4895 scope.go:117] "RemoveContainer" containerID="9143e449f62e512aee0d34bc04457738bff8166802e1eb4169bb13bc5b0877d5" Jan 29 09:13:06 crc kubenswrapper[4895]: I0129 09:13:06.042374 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-msmz2"] Jan 29 09:13:06 crc kubenswrapper[4895]: I0129 09:13:06.052331 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-msmz2"] Jan 29 09:13:07 crc kubenswrapper[4895]: I0129 09:13:07.042735 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-b5c0-account-create-update-lmlgf"] Jan 29 09:13:07 crc kubenswrapper[4895]: I0129 09:13:07.054847 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-105a-account-create-update-s9xqr"] Jan 29 09:13:07 crc kubenswrapper[4895]: I0129 09:13:07.078174 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-b5c0-account-create-update-lmlgf"] Jan 29 09:13:07 crc kubenswrapper[4895]: I0129 09:13:07.089620 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-dpcdv"] Jan 29 09:13:07 crc kubenswrapper[4895]: I0129 09:13:07.098865 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-105a-account-create-update-s9xqr"] Jan 29 09:13:07 crc kubenswrapper[4895]: I0129 09:13:07.107092 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-dpcdv"] Jan 29 09:13:07 crc kubenswrapper[4895]: I0129 09:13:07.116705 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-fnzdv"] Jan 29 09:13:07 crc kubenswrapper[4895]: I0129 09:13:07.127097 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-fnzdv"] Jan 29 09:13:07 crc 
kubenswrapper[4895]: I0129 09:13:07.224274 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad7a906-bb60-46a4-9cd1-edcdbc3db91d" path="/var/lib/kubelet/pods/5ad7a906-bb60-46a4-9cd1-edcdbc3db91d/volumes" Jan 29 09:13:07 crc kubenswrapper[4895]: I0129 09:13:07.225091 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81129832-d241-4127-b30b-9a54a350d12f" path="/var/lib/kubelet/pods/81129832-d241-4127-b30b-9a54a350d12f/volumes" Jan 29 09:13:07 crc kubenswrapper[4895]: I0129 09:13:07.225831 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439" path="/var/lib/kubelet/pods/9c35cd3b-b2fe-4f39-8ff3-c51f5e55f439/volumes" Jan 29 09:13:07 crc kubenswrapper[4895]: I0129 09:13:07.226568 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e22259fa-a96d-4509-9499-a569fe60a39c" path="/var/lib/kubelet/pods/e22259fa-a96d-4509-9499-a569fe60a39c/volumes" Jan 29 09:13:07 crc kubenswrapper[4895]: I0129 09:13:07.227879 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc395a7a-25bb-46cd-89b0-9bbd5b1431f7" path="/var/lib/kubelet/pods/fc395a7a-25bb-46cd-89b0-9bbd5b1431f7/volumes" Jan 29 09:14:04 crc kubenswrapper[4895]: I0129 09:14:04.052250 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6hnzn"] Jan 29 09:14:04 crc kubenswrapper[4895]: I0129 09:14:04.063303 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6hnzn"] Jan 29 09:14:05 crc kubenswrapper[4895]: I0129 09:14:05.227875 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38a22873-d856-41db-84a7-909eabf0d896" path="/var/lib/kubelet/pods/38a22873-d856-41db-84a7-909eabf0d896/volumes" Jan 29 09:14:05 crc kubenswrapper[4895]: I0129 09:14:05.905390 4895 scope.go:117] "RemoveContainer" containerID="02495fe07f98fa2a7ead86a5f12c2939280a050dd3fea0e2965d0ea059f19240" 
Jan 29 09:14:05 crc kubenswrapper[4895]: I0129 09:14:05.937037 4895 scope.go:117] "RemoveContainer" containerID="bd723a998f06f1f7a27c7628406e6dfe66c3ac9c86a7a9bcc9f82084e02b056a" Jan 29 09:14:06 crc kubenswrapper[4895]: I0129 09:14:06.028866 4895 scope.go:117] "RemoveContainer" containerID="a82751c3f05e7e05493ed5f8f7947f34c28b351582ed21bb38257a319ec0ecaf" Jan 29 09:14:06 crc kubenswrapper[4895]: I0129 09:14:06.064303 4895 scope.go:117] "RemoveContainer" containerID="4bed6288328fdfe5e05f6ff06a39266f1340688f518753012e8d13ede234895d" Jan 29 09:14:06 crc kubenswrapper[4895]: I0129 09:14:06.118498 4895 scope.go:117] "RemoveContainer" containerID="b9d6a8a96ce1daf49ebc8ffe0d94bbdf073a017041a716d750c475cf3d4eac83" Jan 29 09:14:06 crc kubenswrapper[4895]: I0129 09:14:06.172764 4895 scope.go:117] "RemoveContainer" containerID="0cc5ce15347f94bf6758485234d0fe21f2a9c878e0068774924b85d44d094c25" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.636523 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5dzdv/must-gather-t4lwx"] Jan 29 09:14:16 crc kubenswrapper[4895]: E0129 09:14:16.637591 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45557ff-dec5-42ea-b306-4847b705e068" containerName="extract-utilities" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.637606 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45557ff-dec5-42ea-b306-4847b705e068" containerName="extract-utilities" Jan 29 09:14:16 crc kubenswrapper[4895]: E0129 09:14:16.637622 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4606f8d5-072f-444a-b7ec-07a696b03640" containerName="extract-content" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.637629 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4606f8d5-072f-444a-b7ec-07a696b03640" containerName="extract-content" Jan 29 09:14:16 crc kubenswrapper[4895]: E0129 09:14:16.637639 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4606f8d5-072f-444a-b7ec-07a696b03640" containerName="registry-server" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.637646 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4606f8d5-072f-444a-b7ec-07a696b03640" containerName="registry-server" Jan 29 09:14:16 crc kubenswrapper[4895]: E0129 09:14:16.637671 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4606f8d5-072f-444a-b7ec-07a696b03640" containerName="extract-utilities" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.637677 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4606f8d5-072f-444a-b7ec-07a696b03640" containerName="extract-utilities" Jan 29 09:14:16 crc kubenswrapper[4895]: E0129 09:14:16.637694 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45557ff-dec5-42ea-b306-4847b705e068" containerName="extract-content" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.637701 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45557ff-dec5-42ea-b306-4847b705e068" containerName="extract-content" Jan 29 09:14:16 crc kubenswrapper[4895]: E0129 09:14:16.637717 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45557ff-dec5-42ea-b306-4847b705e068" containerName="registry-server" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.637723 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45557ff-dec5-42ea-b306-4847b705e068" containerName="registry-server" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.637921 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4606f8d5-072f-444a-b7ec-07a696b03640" containerName="registry-server" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.637952 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45557ff-dec5-42ea-b306-4847b705e068" containerName="registry-server" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.639192 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5dzdv/must-gather-t4lwx" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.643606 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5dzdv"/"kube-root-ca.crt" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.643939 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5dzdv"/"default-dockercfg-hz8gz" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.644184 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5dzdv"/"openshift-service-ca.crt" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.655316 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5dzdv/must-gather-t4lwx"] Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.779779 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98b775ab-6e2f-42a9-91f7-1952e78a0dff-must-gather-output\") pod \"must-gather-t4lwx\" (UID: \"98b775ab-6e2f-42a9-91f7-1952e78a0dff\") " pod="openshift-must-gather-5dzdv/must-gather-t4lwx" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.779851 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thlmz\" (UniqueName: \"kubernetes.io/projected/98b775ab-6e2f-42a9-91f7-1952e78a0dff-kube-api-access-thlmz\") pod \"must-gather-t4lwx\" (UID: \"98b775ab-6e2f-42a9-91f7-1952e78a0dff\") " pod="openshift-must-gather-5dzdv/must-gather-t4lwx" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.881874 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98b775ab-6e2f-42a9-91f7-1952e78a0dff-must-gather-output\") pod \"must-gather-t4lwx\" (UID: \"98b775ab-6e2f-42a9-91f7-1952e78a0dff\") " 
pod="openshift-must-gather-5dzdv/must-gather-t4lwx" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.882035 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thlmz\" (UniqueName: \"kubernetes.io/projected/98b775ab-6e2f-42a9-91f7-1952e78a0dff-kube-api-access-thlmz\") pod \"must-gather-t4lwx\" (UID: \"98b775ab-6e2f-42a9-91f7-1952e78a0dff\") " pod="openshift-must-gather-5dzdv/must-gather-t4lwx" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.882493 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98b775ab-6e2f-42a9-91f7-1952e78a0dff-must-gather-output\") pod \"must-gather-t4lwx\" (UID: \"98b775ab-6e2f-42a9-91f7-1952e78a0dff\") " pod="openshift-must-gather-5dzdv/must-gather-t4lwx" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.905458 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thlmz\" (UniqueName: \"kubernetes.io/projected/98b775ab-6e2f-42a9-91f7-1952e78a0dff-kube-api-access-thlmz\") pod \"must-gather-t4lwx\" (UID: \"98b775ab-6e2f-42a9-91f7-1952e78a0dff\") " pod="openshift-must-gather-5dzdv/must-gather-t4lwx" Jan 29 09:14:16 crc kubenswrapper[4895]: I0129 09:14:16.963776 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5dzdv/must-gather-t4lwx" Jan 29 09:14:17 crc kubenswrapper[4895]: I0129 09:14:17.723105 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5dzdv/must-gather-t4lwx"] Jan 29 09:14:17 crc kubenswrapper[4895]: I0129 09:14:17.737591 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:14:18 crc kubenswrapper[4895]: I0129 09:14:18.489185 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5dzdv/must-gather-t4lwx" event={"ID":"98b775ab-6e2f-42a9-91f7-1952e78a0dff","Type":"ContainerStarted","Data":"c6d00586b57a787d5b6e27393b03a77e6c462d5419a72df86830c6fb43c93c26"} Jan 29 09:14:25 crc kubenswrapper[4895]: I0129 09:14:25.579118 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5dzdv/must-gather-t4lwx" event={"ID":"98b775ab-6e2f-42a9-91f7-1952e78a0dff","Type":"ContainerStarted","Data":"88934454c3a3dcb90a696002abb4df5a9f624b6a51eeafb09cc207e40696c842"} Jan 29 09:14:25 crc kubenswrapper[4895]: I0129 09:14:25.579970 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5dzdv/must-gather-t4lwx" event={"ID":"98b775ab-6e2f-42a9-91f7-1952e78a0dff","Type":"ContainerStarted","Data":"2ceedf1250478e3c5ff09b2447078f815039aef773cd3b90129cbfff1fab9053"} Jan 29 09:14:25 crc kubenswrapper[4895]: I0129 09:14:25.601910 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5dzdv/must-gather-t4lwx" podStartSLOduration=2.43280166 podStartE2EDuration="9.601885435s" podCreationTimestamp="2026-01-29 09:14:16 +0000 UTC" firstStartedPulling="2026-01-29 09:14:17.737173146 +0000 UTC m=+1999.378681292" lastFinishedPulling="2026-01-29 09:14:24.906256921 +0000 UTC m=+2006.547765067" observedRunningTime="2026-01-29 09:14:25.600512288 +0000 UTC m=+2007.242020434" watchObservedRunningTime="2026-01-29 09:14:25.601885435 +0000 UTC 
m=+2007.243393591" Jan 29 09:14:29 crc kubenswrapper[4895]: I0129 09:14:29.322274 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5dzdv/crc-debug-qzgl9"] Jan 29 09:14:29 crc kubenswrapper[4895]: I0129 09:14:29.325822 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5dzdv/crc-debug-qzgl9" Jan 29 09:14:29 crc kubenswrapper[4895]: I0129 09:14:29.413473 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/766a6be8-f709-49b0-ae55-6de97d961fd7-host\") pod \"crc-debug-qzgl9\" (UID: \"766a6be8-f709-49b0-ae55-6de97d961fd7\") " pod="openshift-must-gather-5dzdv/crc-debug-qzgl9" Jan 29 09:14:29 crc kubenswrapper[4895]: I0129 09:14:29.413564 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fws2z\" (UniqueName: \"kubernetes.io/projected/766a6be8-f709-49b0-ae55-6de97d961fd7-kube-api-access-fws2z\") pod \"crc-debug-qzgl9\" (UID: \"766a6be8-f709-49b0-ae55-6de97d961fd7\") " pod="openshift-must-gather-5dzdv/crc-debug-qzgl9" Jan 29 09:14:29 crc kubenswrapper[4895]: I0129 09:14:29.515410 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/766a6be8-f709-49b0-ae55-6de97d961fd7-host\") pod \"crc-debug-qzgl9\" (UID: \"766a6be8-f709-49b0-ae55-6de97d961fd7\") " pod="openshift-must-gather-5dzdv/crc-debug-qzgl9" Jan 29 09:14:29 crc kubenswrapper[4895]: I0129 09:14:29.515750 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fws2z\" (UniqueName: \"kubernetes.io/projected/766a6be8-f709-49b0-ae55-6de97d961fd7-kube-api-access-fws2z\") pod \"crc-debug-qzgl9\" (UID: \"766a6be8-f709-49b0-ae55-6de97d961fd7\") " pod="openshift-must-gather-5dzdv/crc-debug-qzgl9" Jan 29 09:14:29 crc kubenswrapper[4895]: I0129 09:14:29.515661 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/766a6be8-f709-49b0-ae55-6de97d961fd7-host\") pod \"crc-debug-qzgl9\" (UID: \"766a6be8-f709-49b0-ae55-6de97d961fd7\") " pod="openshift-must-gather-5dzdv/crc-debug-qzgl9" Jan 29 09:14:29 crc kubenswrapper[4895]: I0129 09:14:29.549369 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fws2z\" (UniqueName: \"kubernetes.io/projected/766a6be8-f709-49b0-ae55-6de97d961fd7-kube-api-access-fws2z\") pod \"crc-debug-qzgl9\" (UID: \"766a6be8-f709-49b0-ae55-6de97d961fd7\") " pod="openshift-must-gather-5dzdv/crc-debug-qzgl9" Jan 29 09:14:29 crc kubenswrapper[4895]: I0129 09:14:29.647519 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5dzdv/crc-debug-qzgl9" Jan 29 09:14:29 crc kubenswrapper[4895]: W0129 09:14:29.693368 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod766a6be8_f709_49b0_ae55_6de97d961fd7.slice/crio-0fee785bed44d3f3cc2ed1c401554ed8fc68a4a04e30c57a5d7da3d778550dbd WatchSource:0}: Error finding container 0fee785bed44d3f3cc2ed1c401554ed8fc68a4a04e30c57a5d7da3d778550dbd: Status 404 returned error can't find the container with id 0fee785bed44d3f3cc2ed1c401554ed8fc68a4a04e30c57a5d7da3d778550dbd Jan 29 09:14:30 crc kubenswrapper[4895]: I0129 09:14:30.630597 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5dzdv/crc-debug-qzgl9" event={"ID":"766a6be8-f709-49b0-ae55-6de97d961fd7","Type":"ContainerStarted","Data":"0fee785bed44d3f3cc2ed1c401554ed8fc68a4a04e30c57a5d7da3d778550dbd"} Jan 29 09:14:31 crc kubenswrapper[4895]: I0129 09:14:31.095769 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-6khzt"] Jan 29 09:14:31 crc kubenswrapper[4895]: I0129 09:14:31.108607 4895 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-cell0-cell-mapping-6khzt"] Jan 29 09:14:31 crc kubenswrapper[4895]: I0129 09:14:31.238177 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f95b2d12-bd09-497c-84a2-b145f94a4818" path="/var/lib/kubelet/pods/f95b2d12-bd09-497c-84a2-b145f94a4818/volumes" Jan 29 09:14:34 crc kubenswrapper[4895]: I0129 09:14:34.032904 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jx5qb"] Jan 29 09:14:34 crc kubenswrapper[4895]: I0129 09:14:34.053572 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jx5qb"] Jan 29 09:14:35 crc kubenswrapper[4895]: I0129 09:14:35.230444 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e58f4f0b-0a2b-4f02-a61c-903e35516ce6" path="/var/lib/kubelet/pods/e58f4f0b-0a2b-4f02-a61c-903e35516ce6/volumes" Jan 29 09:14:44 crc kubenswrapper[4895]: I0129 09:14:44.844781 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5dzdv/crc-debug-qzgl9" event={"ID":"766a6be8-f709-49b0-ae55-6de97d961fd7","Type":"ContainerStarted","Data":"4965c08c364131288b31f894482c47883ffd193d08f6569c6e49cdb6e5709ac1"} Jan 29 09:14:44 crc kubenswrapper[4895]: I0129 09:14:44.863709 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5dzdv/crc-debug-qzgl9" podStartSLOduration=1.1947296920000001 podStartE2EDuration="15.863683765s" podCreationTimestamp="2026-01-29 09:14:29 +0000 UTC" firstStartedPulling="2026-01-29 09:14:29.696299947 +0000 UTC m=+2011.337808093" lastFinishedPulling="2026-01-29 09:14:44.36525402 +0000 UTC m=+2026.006762166" observedRunningTime="2026-01-29 09:14:44.863263264 +0000 UTC m=+2026.504771400" watchObservedRunningTime="2026-01-29 09:14:44.863683765 +0000 UTC m=+2026.505191911" Jan 29 09:15:00 crc kubenswrapper[4895]: I0129 09:15:00.169476 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt"] Jan 29 09:15:00 crc kubenswrapper[4895]: I0129 09:15:00.172254 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt" Jan 29 09:15:00 crc kubenswrapper[4895]: I0129 09:15:00.176419 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 09:15:00 crc kubenswrapper[4895]: I0129 09:15:00.176816 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 09:15:00 crc kubenswrapper[4895]: I0129 09:15:00.190986 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt"] Jan 29 09:15:00 crc kubenswrapper[4895]: I0129 09:15:00.297246 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg5m5\" (UniqueName: \"kubernetes.io/projected/46cab2dd-221d-44ce-9064-9d02b5c5e726-kube-api-access-cg5m5\") pod \"collect-profiles-29494635-x55jt\" (UID: \"46cab2dd-221d-44ce-9064-9d02b5c5e726\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt" Jan 29 09:15:00 crc kubenswrapper[4895]: I0129 09:15:00.297432 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46cab2dd-221d-44ce-9064-9d02b5c5e726-secret-volume\") pod \"collect-profiles-29494635-x55jt\" (UID: \"46cab2dd-221d-44ce-9064-9d02b5c5e726\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt" Jan 29 09:15:00 crc kubenswrapper[4895]: I0129 09:15:00.297526 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/46cab2dd-221d-44ce-9064-9d02b5c5e726-config-volume\") pod \"collect-profiles-29494635-x55jt\" (UID: \"46cab2dd-221d-44ce-9064-9d02b5c5e726\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt" Jan 29 09:15:00 crc kubenswrapper[4895]: I0129 09:15:00.399318 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46cab2dd-221d-44ce-9064-9d02b5c5e726-config-volume\") pod \"collect-profiles-29494635-x55jt\" (UID: \"46cab2dd-221d-44ce-9064-9d02b5c5e726\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt" Jan 29 09:15:00 crc kubenswrapper[4895]: I0129 09:15:00.399481 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg5m5\" (UniqueName: \"kubernetes.io/projected/46cab2dd-221d-44ce-9064-9d02b5c5e726-kube-api-access-cg5m5\") pod \"collect-profiles-29494635-x55jt\" (UID: \"46cab2dd-221d-44ce-9064-9d02b5c5e726\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt" Jan 29 09:15:00 crc kubenswrapper[4895]: I0129 09:15:00.399711 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46cab2dd-221d-44ce-9064-9d02b5c5e726-secret-volume\") pod \"collect-profiles-29494635-x55jt\" (UID: \"46cab2dd-221d-44ce-9064-9d02b5c5e726\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt" Jan 29 09:15:00 crc kubenswrapper[4895]: I0129 09:15:00.400440 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46cab2dd-221d-44ce-9064-9d02b5c5e726-config-volume\") pod \"collect-profiles-29494635-x55jt\" (UID: \"46cab2dd-221d-44ce-9064-9d02b5c5e726\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt" Jan 29 09:15:00 crc kubenswrapper[4895]: I0129 09:15:00.410764 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46cab2dd-221d-44ce-9064-9d02b5c5e726-secret-volume\") pod \"collect-profiles-29494635-x55jt\" (UID: \"46cab2dd-221d-44ce-9064-9d02b5c5e726\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt" Jan 29 09:15:00 crc kubenswrapper[4895]: I0129 09:15:00.424273 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg5m5\" (UniqueName: \"kubernetes.io/projected/46cab2dd-221d-44ce-9064-9d02b5c5e726-kube-api-access-cg5m5\") pod \"collect-profiles-29494635-x55jt\" (UID: \"46cab2dd-221d-44ce-9064-9d02b5c5e726\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt" Jan 29 09:15:00 crc kubenswrapper[4895]: I0129 09:15:00.501112 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt" Jan 29 09:15:01 crc kubenswrapper[4895]: I0129 09:15:01.037862 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt"] Jan 29 09:15:02 crc kubenswrapper[4895]: I0129 09:15:02.015525 4895 generic.go:334] "Generic (PLEG): container finished" podID="46cab2dd-221d-44ce-9064-9d02b5c5e726" containerID="c0266dc7a942071c62b35bd213dfda61998aa4499edf6c9bedfd07a267f6e5bf" exitCode=0 Jan 29 09:15:02 crc kubenswrapper[4895]: I0129 09:15:02.016280 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt" event={"ID":"46cab2dd-221d-44ce-9064-9d02b5c5e726","Type":"ContainerDied","Data":"c0266dc7a942071c62b35bd213dfda61998aa4499edf6c9bedfd07a267f6e5bf"} Jan 29 09:15:02 crc kubenswrapper[4895]: I0129 09:15:02.016322 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt" 
event={"ID":"46cab2dd-221d-44ce-9064-9d02b5c5e726","Type":"ContainerStarted","Data":"049131d47b9fbf462b146d9df0c73b4dc433663232a30f9347593fdb6ce98fa0"} Jan 29 09:15:04 crc kubenswrapper[4895]: I0129 09:15:04.037611 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt" event={"ID":"46cab2dd-221d-44ce-9064-9d02b5c5e726","Type":"ContainerDied","Data":"049131d47b9fbf462b146d9df0c73b4dc433663232a30f9347593fdb6ce98fa0"} Jan 29 09:15:04 crc kubenswrapper[4895]: I0129 09:15:04.038368 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="049131d47b9fbf462b146d9df0c73b4dc433663232a30f9347593fdb6ce98fa0" Jan 29 09:15:04 crc kubenswrapper[4895]: I0129 09:15:04.669754 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt" Jan 29 09:15:04 crc kubenswrapper[4895]: I0129 09:15:04.833105 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46cab2dd-221d-44ce-9064-9d02b5c5e726-secret-volume\") pod \"46cab2dd-221d-44ce-9064-9d02b5c5e726\" (UID: \"46cab2dd-221d-44ce-9064-9d02b5c5e726\") " Jan 29 09:15:04 crc kubenswrapper[4895]: I0129 09:15:04.833203 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46cab2dd-221d-44ce-9064-9d02b5c5e726-config-volume\") pod \"46cab2dd-221d-44ce-9064-9d02b5c5e726\" (UID: \"46cab2dd-221d-44ce-9064-9d02b5c5e726\") " Jan 29 09:15:04 crc kubenswrapper[4895]: I0129 09:15:04.833246 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg5m5\" (UniqueName: \"kubernetes.io/projected/46cab2dd-221d-44ce-9064-9d02b5c5e726-kube-api-access-cg5m5\") pod \"46cab2dd-221d-44ce-9064-9d02b5c5e726\" (UID: \"46cab2dd-221d-44ce-9064-9d02b5c5e726\") 
" Jan 29 09:15:04 crc kubenswrapper[4895]: I0129 09:15:04.836101 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46cab2dd-221d-44ce-9064-9d02b5c5e726-config-volume" (OuterVolumeSpecName: "config-volume") pod "46cab2dd-221d-44ce-9064-9d02b5c5e726" (UID: "46cab2dd-221d-44ce-9064-9d02b5c5e726"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:15:04 crc kubenswrapper[4895]: I0129 09:15:04.863512 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46cab2dd-221d-44ce-9064-9d02b5c5e726-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "46cab2dd-221d-44ce-9064-9d02b5c5e726" (UID: "46cab2dd-221d-44ce-9064-9d02b5c5e726"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:15:04 crc kubenswrapper[4895]: I0129 09:15:04.877219 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46cab2dd-221d-44ce-9064-9d02b5c5e726-kube-api-access-cg5m5" (OuterVolumeSpecName: "kube-api-access-cg5m5") pod "46cab2dd-221d-44ce-9064-9d02b5c5e726" (UID: "46cab2dd-221d-44ce-9064-9d02b5c5e726"). InnerVolumeSpecName "kube-api-access-cg5m5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:15:04 crc kubenswrapper[4895]: I0129 09:15:04.943134 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46cab2dd-221d-44ce-9064-9d02b5c5e726-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:04 crc kubenswrapper[4895]: I0129 09:15:04.943603 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46cab2dd-221d-44ce-9064-9d02b5c5e726-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:04 crc kubenswrapper[4895]: I0129 09:15:04.943620 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg5m5\" (UniqueName: \"kubernetes.io/projected/46cab2dd-221d-44ce-9064-9d02b5c5e726-kube-api-access-cg5m5\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:05 crc kubenswrapper[4895]: I0129 09:15:05.047182 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-x55jt" Jan 29 09:15:05 crc kubenswrapper[4895]: I0129 09:15:05.765189 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg"] Jan 29 09:15:05 crc kubenswrapper[4895]: I0129 09:15:05.776936 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-2xwwg"] Jan 29 09:15:06 crc kubenswrapper[4895]: I0129 09:15:06.353125 4895 scope.go:117] "RemoveContainer" containerID="fa05d8dda4fde7bc10b7f544d4c1819066a36289672d37d6b23c288161874ea2" Jan 29 09:15:06 crc kubenswrapper[4895]: I0129 09:15:06.424157 4895 scope.go:117] "RemoveContainer" containerID="c88a21c4b4e0ab0cc669142099f90dd90bc52f98af717769ff4991d45c619a28" Jan 29 09:15:07 crc kubenswrapper[4895]: I0129 09:15:07.227204 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d42bddeb-f93a-4603-a38e-1016ca2b3a03" path="/var/lib/kubelet/pods/d42bddeb-f93a-4603-a38e-1016ca2b3a03/volumes" Jan 29 09:15:15 crc kubenswrapper[4895]: I0129 09:15:15.051037 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-wqmlx"] Jan 29 09:15:15 crc kubenswrapper[4895]: I0129 09:15:15.062071 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-wqmlx"] Jan 29 09:15:15 crc kubenswrapper[4895]: I0129 09:15:15.232802 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7867aa6-5213-42fd-b3fd-592a74e6959e" path="/var/lib/kubelet/pods/e7867aa6-5213-42fd-b3fd-592a74e6959e/volumes" Jan 29 09:15:16 crc kubenswrapper[4895]: I0129 09:15:16.020403 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:15:16 crc kubenswrapper[4895]: I0129 09:15:16.020479 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:15:32 crc kubenswrapper[4895]: I0129 09:15:32.347026 4895 generic.go:334] "Generic (PLEG): container finished" podID="766a6be8-f709-49b0-ae55-6de97d961fd7" containerID="4965c08c364131288b31f894482c47883ffd193d08f6569c6e49cdb6e5709ac1" exitCode=0 Jan 29 09:15:32 crc kubenswrapper[4895]: I0129 09:15:32.347500 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5dzdv/crc-debug-qzgl9" 
event={"ID":"766a6be8-f709-49b0-ae55-6de97d961fd7","Type":"ContainerDied","Data":"4965c08c364131288b31f894482c47883ffd193d08f6569c6e49cdb6e5709ac1"} Jan 29 09:15:33 crc kubenswrapper[4895]: I0129 09:15:33.474460 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5dzdv/crc-debug-qzgl9" Jan 29 09:15:33 crc kubenswrapper[4895]: I0129 09:15:33.517810 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5dzdv/crc-debug-qzgl9"] Jan 29 09:15:33 crc kubenswrapper[4895]: I0129 09:15:33.528575 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5dzdv/crc-debug-qzgl9"] Jan 29 09:15:33 crc kubenswrapper[4895]: I0129 09:15:33.580434 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/766a6be8-f709-49b0-ae55-6de97d961fd7-host\") pod \"766a6be8-f709-49b0-ae55-6de97d961fd7\" (UID: \"766a6be8-f709-49b0-ae55-6de97d961fd7\") " Jan 29 09:15:33 crc kubenswrapper[4895]: I0129 09:15:33.580605 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fws2z\" (UniqueName: \"kubernetes.io/projected/766a6be8-f709-49b0-ae55-6de97d961fd7-kube-api-access-fws2z\") pod \"766a6be8-f709-49b0-ae55-6de97d961fd7\" (UID: \"766a6be8-f709-49b0-ae55-6de97d961fd7\") " Jan 29 09:15:33 crc kubenswrapper[4895]: I0129 09:15:33.581404 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/766a6be8-f709-49b0-ae55-6de97d961fd7-host" (OuterVolumeSpecName: "host") pod "766a6be8-f709-49b0-ae55-6de97d961fd7" (UID: "766a6be8-f709-49b0-ae55-6de97d961fd7"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:15:33 crc kubenswrapper[4895]: I0129 09:15:33.582297 4895 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/766a6be8-f709-49b0-ae55-6de97d961fd7-host\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:33 crc kubenswrapper[4895]: I0129 09:15:33.593362 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766a6be8-f709-49b0-ae55-6de97d961fd7-kube-api-access-fws2z" (OuterVolumeSpecName: "kube-api-access-fws2z") pod "766a6be8-f709-49b0-ae55-6de97d961fd7" (UID: "766a6be8-f709-49b0-ae55-6de97d961fd7"). InnerVolumeSpecName "kube-api-access-fws2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:15:33 crc kubenswrapper[4895]: I0129 09:15:33.684908 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fws2z\" (UniqueName: \"kubernetes.io/projected/766a6be8-f709-49b0-ae55-6de97d961fd7-kube-api-access-fws2z\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:34 crc kubenswrapper[4895]: I0129 09:15:34.367591 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fee785bed44d3f3cc2ed1c401554ed8fc68a4a04e30c57a5d7da3d778550dbd" Jan 29 09:15:34 crc kubenswrapper[4895]: I0129 09:15:34.367716 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5dzdv/crc-debug-qzgl9" Jan 29 09:15:34 crc kubenswrapper[4895]: I0129 09:15:34.735646 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5dzdv/crc-debug-k9hfz"] Jan 29 09:15:34 crc kubenswrapper[4895]: E0129 09:15:34.736640 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766a6be8-f709-49b0-ae55-6de97d961fd7" containerName="container-00" Jan 29 09:15:34 crc kubenswrapper[4895]: I0129 09:15:34.736658 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="766a6be8-f709-49b0-ae55-6de97d961fd7" containerName="container-00" Jan 29 09:15:34 crc kubenswrapper[4895]: E0129 09:15:34.736705 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46cab2dd-221d-44ce-9064-9d02b5c5e726" containerName="collect-profiles" Jan 29 09:15:34 crc kubenswrapper[4895]: I0129 09:15:34.736714 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="46cab2dd-221d-44ce-9064-9d02b5c5e726" containerName="collect-profiles" Jan 29 09:15:34 crc kubenswrapper[4895]: I0129 09:15:34.736975 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="46cab2dd-221d-44ce-9064-9d02b5c5e726" containerName="collect-profiles" Jan 29 09:15:34 crc kubenswrapper[4895]: I0129 09:15:34.737009 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="766a6be8-f709-49b0-ae55-6de97d961fd7" containerName="container-00" Jan 29 09:15:34 crc kubenswrapper[4895]: I0129 09:15:34.738068 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5dzdv/crc-debug-k9hfz" Jan 29 09:15:34 crc kubenswrapper[4895]: I0129 09:15:34.744486 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktwr2\" (UniqueName: \"kubernetes.io/projected/464da75d-ee5d-4ab8-b8ae-8a41ce5162e4-kube-api-access-ktwr2\") pod \"crc-debug-k9hfz\" (UID: \"464da75d-ee5d-4ab8-b8ae-8a41ce5162e4\") " pod="openshift-must-gather-5dzdv/crc-debug-k9hfz" Jan 29 09:15:34 crc kubenswrapper[4895]: I0129 09:15:34.744830 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/464da75d-ee5d-4ab8-b8ae-8a41ce5162e4-host\") pod \"crc-debug-k9hfz\" (UID: \"464da75d-ee5d-4ab8-b8ae-8a41ce5162e4\") " pod="openshift-must-gather-5dzdv/crc-debug-k9hfz" Jan 29 09:15:34 crc kubenswrapper[4895]: I0129 09:15:34.848155 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktwr2\" (UniqueName: \"kubernetes.io/projected/464da75d-ee5d-4ab8-b8ae-8a41ce5162e4-kube-api-access-ktwr2\") pod \"crc-debug-k9hfz\" (UID: \"464da75d-ee5d-4ab8-b8ae-8a41ce5162e4\") " pod="openshift-must-gather-5dzdv/crc-debug-k9hfz" Jan 29 09:15:34 crc kubenswrapper[4895]: I0129 09:15:34.848709 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/464da75d-ee5d-4ab8-b8ae-8a41ce5162e4-host\") pod \"crc-debug-k9hfz\" (UID: \"464da75d-ee5d-4ab8-b8ae-8a41ce5162e4\") " pod="openshift-must-gather-5dzdv/crc-debug-k9hfz" Jan 29 09:15:34 crc kubenswrapper[4895]: I0129 09:15:34.848875 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/464da75d-ee5d-4ab8-b8ae-8a41ce5162e4-host\") pod \"crc-debug-k9hfz\" (UID: \"464da75d-ee5d-4ab8-b8ae-8a41ce5162e4\") " pod="openshift-must-gather-5dzdv/crc-debug-k9hfz" Jan 29 09:15:34 crc 
kubenswrapper[4895]: I0129 09:15:34.882026 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktwr2\" (UniqueName: \"kubernetes.io/projected/464da75d-ee5d-4ab8-b8ae-8a41ce5162e4-kube-api-access-ktwr2\") pod \"crc-debug-k9hfz\" (UID: \"464da75d-ee5d-4ab8-b8ae-8a41ce5162e4\") " pod="openshift-must-gather-5dzdv/crc-debug-k9hfz" Jan 29 09:15:35 crc kubenswrapper[4895]: I0129 09:15:35.081476 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5dzdv/crc-debug-k9hfz" Jan 29 09:15:35 crc kubenswrapper[4895]: I0129 09:15:35.239605 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="766a6be8-f709-49b0-ae55-6de97d961fd7" path="/var/lib/kubelet/pods/766a6be8-f709-49b0-ae55-6de97d961fd7/volumes" Jan 29 09:15:35 crc kubenswrapper[4895]: I0129 09:15:35.384035 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5dzdv/crc-debug-k9hfz" event={"ID":"464da75d-ee5d-4ab8-b8ae-8a41ce5162e4","Type":"ContainerStarted","Data":"3265e5d6d0198bc7113ef1ae7187462f962104c67f7ac84e152dce24e5b20602"} Jan 29 09:15:35 crc kubenswrapper[4895]: E0129 09:15:35.634281 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod464da75d_ee5d_4ab8_b8ae_8a41ce5162e4.slice/crio-484b1e6cead5cb5a0fd39e2907d329d61881bee3da91de88df35d651be5cc44a.scope\": RecentStats: unable to find data in memory cache]" Jan 29 09:15:36 crc kubenswrapper[4895]: I0129 09:15:36.396244 4895 generic.go:334] "Generic (PLEG): container finished" podID="464da75d-ee5d-4ab8-b8ae-8a41ce5162e4" containerID="484b1e6cead5cb5a0fd39e2907d329d61881bee3da91de88df35d651be5cc44a" exitCode=0 Jan 29 09:15:36 crc kubenswrapper[4895]: I0129 09:15:36.396348 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5dzdv/crc-debug-k9hfz" 
event={"ID":"464da75d-ee5d-4ab8-b8ae-8a41ce5162e4","Type":"ContainerDied","Data":"484b1e6cead5cb5a0fd39e2907d329d61881bee3da91de88df35d651be5cc44a"} Jan 29 09:15:36 crc kubenswrapper[4895]: I0129 09:15:36.957893 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5dzdv/crc-debug-k9hfz"] Jan 29 09:15:36 crc kubenswrapper[4895]: I0129 09:15:36.965377 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5dzdv/crc-debug-k9hfz"] Jan 29 09:15:37 crc kubenswrapper[4895]: I0129 09:15:37.518006 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5dzdv/crc-debug-k9hfz" Jan 29 09:15:37 crc kubenswrapper[4895]: I0129 09:15:37.712611 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/464da75d-ee5d-4ab8-b8ae-8a41ce5162e4-host\") pod \"464da75d-ee5d-4ab8-b8ae-8a41ce5162e4\" (UID: \"464da75d-ee5d-4ab8-b8ae-8a41ce5162e4\") " Jan 29 09:15:37 crc kubenswrapper[4895]: I0129 09:15:37.712810 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/464da75d-ee5d-4ab8-b8ae-8a41ce5162e4-host" (OuterVolumeSpecName: "host") pod "464da75d-ee5d-4ab8-b8ae-8a41ce5162e4" (UID: "464da75d-ee5d-4ab8-b8ae-8a41ce5162e4"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:15:37 crc kubenswrapper[4895]: I0129 09:15:37.713227 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktwr2\" (UniqueName: \"kubernetes.io/projected/464da75d-ee5d-4ab8-b8ae-8a41ce5162e4-kube-api-access-ktwr2\") pod \"464da75d-ee5d-4ab8-b8ae-8a41ce5162e4\" (UID: \"464da75d-ee5d-4ab8-b8ae-8a41ce5162e4\") " Jan 29 09:15:37 crc kubenswrapper[4895]: I0129 09:15:37.714753 4895 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/464da75d-ee5d-4ab8-b8ae-8a41ce5162e4-host\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:37 crc kubenswrapper[4895]: I0129 09:15:37.739895 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/464da75d-ee5d-4ab8-b8ae-8a41ce5162e4-kube-api-access-ktwr2" (OuterVolumeSpecName: "kube-api-access-ktwr2") pod "464da75d-ee5d-4ab8-b8ae-8a41ce5162e4" (UID: "464da75d-ee5d-4ab8-b8ae-8a41ce5162e4"). InnerVolumeSpecName "kube-api-access-ktwr2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:15:37 crc kubenswrapper[4895]: I0129 09:15:37.817229 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktwr2\" (UniqueName: \"kubernetes.io/projected/464da75d-ee5d-4ab8-b8ae-8a41ce5162e4-kube-api-access-ktwr2\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:38 crc kubenswrapper[4895]: I0129 09:15:38.158400 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5dzdv/crc-debug-tn7wg"] Jan 29 09:15:38 crc kubenswrapper[4895]: E0129 09:15:38.158944 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464da75d-ee5d-4ab8-b8ae-8a41ce5162e4" containerName="container-00" Jan 29 09:15:38 crc kubenswrapper[4895]: I0129 09:15:38.158968 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="464da75d-ee5d-4ab8-b8ae-8a41ce5162e4" containerName="container-00" Jan 29 09:15:38 crc kubenswrapper[4895]: I0129 09:15:38.159244 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="464da75d-ee5d-4ab8-b8ae-8a41ce5162e4" containerName="container-00" Jan 29 09:15:38 crc kubenswrapper[4895]: I0129 09:15:38.160737 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5dzdv/crc-debug-tn7wg" Jan 29 09:15:38 crc kubenswrapper[4895]: I0129 09:15:38.328996 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th96x\" (UniqueName: \"kubernetes.io/projected/3c6b5992-595d-4753-8cf6-0a9f2e1e438b-kube-api-access-th96x\") pod \"crc-debug-tn7wg\" (UID: \"3c6b5992-595d-4753-8cf6-0a9f2e1e438b\") " pod="openshift-must-gather-5dzdv/crc-debug-tn7wg" Jan 29 09:15:38 crc kubenswrapper[4895]: I0129 09:15:38.329072 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3c6b5992-595d-4753-8cf6-0a9f2e1e438b-host\") pod \"crc-debug-tn7wg\" (UID: \"3c6b5992-595d-4753-8cf6-0a9f2e1e438b\") " pod="openshift-must-gather-5dzdv/crc-debug-tn7wg" Jan 29 09:15:38 crc kubenswrapper[4895]: I0129 09:15:38.420691 4895 scope.go:117] "RemoveContainer" containerID="484b1e6cead5cb5a0fd39e2907d329d61881bee3da91de88df35d651be5cc44a" Jan 29 09:15:38 crc kubenswrapper[4895]: I0129 09:15:38.420760 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5dzdv/crc-debug-k9hfz" Jan 29 09:15:38 crc kubenswrapper[4895]: I0129 09:15:38.432502 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th96x\" (UniqueName: \"kubernetes.io/projected/3c6b5992-595d-4753-8cf6-0a9f2e1e438b-kube-api-access-th96x\") pod \"crc-debug-tn7wg\" (UID: \"3c6b5992-595d-4753-8cf6-0a9f2e1e438b\") " pod="openshift-must-gather-5dzdv/crc-debug-tn7wg" Jan 29 09:15:38 crc kubenswrapper[4895]: I0129 09:15:38.432576 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3c6b5992-595d-4753-8cf6-0a9f2e1e438b-host\") pod \"crc-debug-tn7wg\" (UID: \"3c6b5992-595d-4753-8cf6-0a9f2e1e438b\") " pod="openshift-must-gather-5dzdv/crc-debug-tn7wg" Jan 29 09:15:38 crc kubenswrapper[4895]: I0129 09:15:38.432776 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3c6b5992-595d-4753-8cf6-0a9f2e1e438b-host\") pod \"crc-debug-tn7wg\" (UID: \"3c6b5992-595d-4753-8cf6-0a9f2e1e438b\") " pod="openshift-must-gather-5dzdv/crc-debug-tn7wg" Jan 29 09:15:38 crc kubenswrapper[4895]: I0129 09:15:38.454141 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th96x\" (UniqueName: \"kubernetes.io/projected/3c6b5992-595d-4753-8cf6-0a9f2e1e438b-kube-api-access-th96x\") pod \"crc-debug-tn7wg\" (UID: \"3c6b5992-595d-4753-8cf6-0a9f2e1e438b\") " pod="openshift-must-gather-5dzdv/crc-debug-tn7wg" Jan 29 09:15:38 crc kubenswrapper[4895]: I0129 09:15:38.482229 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5dzdv/crc-debug-tn7wg" Jan 29 09:15:38 crc kubenswrapper[4895]: W0129 09:15:38.515543 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c6b5992_595d_4753_8cf6_0a9f2e1e438b.slice/crio-7564c1f996c1759993b42577bd12594bbfd9bdd905a8f0b472cecbddf5f63880 WatchSource:0}: Error finding container 7564c1f996c1759993b42577bd12594bbfd9bdd905a8f0b472cecbddf5f63880: Status 404 returned error can't find the container with id 7564c1f996c1759993b42577bd12594bbfd9bdd905a8f0b472cecbddf5f63880 Jan 29 09:15:39 crc kubenswrapper[4895]: I0129 09:15:39.226403 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="464da75d-ee5d-4ab8-b8ae-8a41ce5162e4" path="/var/lib/kubelet/pods/464da75d-ee5d-4ab8-b8ae-8a41ce5162e4/volumes" Jan 29 09:15:39 crc kubenswrapper[4895]: I0129 09:15:39.438100 4895 generic.go:334] "Generic (PLEG): container finished" podID="3c6b5992-595d-4753-8cf6-0a9f2e1e438b" containerID="d81f89e8e216c89f634b3528db24a573fd5f6dc96c85ac415eb7c12aeab073b7" exitCode=0 Jan 29 09:15:39 crc kubenswrapper[4895]: I0129 09:15:39.438198 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5dzdv/crc-debug-tn7wg" event={"ID":"3c6b5992-595d-4753-8cf6-0a9f2e1e438b","Type":"ContainerDied","Data":"d81f89e8e216c89f634b3528db24a573fd5f6dc96c85ac415eb7c12aeab073b7"} Jan 29 09:15:39 crc kubenswrapper[4895]: I0129 09:15:39.438279 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5dzdv/crc-debug-tn7wg" event={"ID":"3c6b5992-595d-4753-8cf6-0a9f2e1e438b","Type":"ContainerStarted","Data":"7564c1f996c1759993b42577bd12594bbfd9bdd905a8f0b472cecbddf5f63880"} Jan 29 09:15:39 crc kubenswrapper[4895]: I0129 09:15:39.494848 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5dzdv/crc-debug-tn7wg"] Jan 29 09:15:39 crc kubenswrapper[4895]: I0129 09:15:39.503271 4895 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5dzdv/crc-debug-tn7wg"] Jan 29 09:15:40 crc kubenswrapper[4895]: I0129 09:15:40.573425 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5dzdv/crc-debug-tn7wg" Jan 29 09:15:40 crc kubenswrapper[4895]: I0129 09:15:40.683777 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th96x\" (UniqueName: \"kubernetes.io/projected/3c6b5992-595d-4753-8cf6-0a9f2e1e438b-kube-api-access-th96x\") pod \"3c6b5992-595d-4753-8cf6-0a9f2e1e438b\" (UID: \"3c6b5992-595d-4753-8cf6-0a9f2e1e438b\") " Jan 29 09:15:40 crc kubenswrapper[4895]: I0129 09:15:40.685448 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3c6b5992-595d-4753-8cf6-0a9f2e1e438b-host\") pod \"3c6b5992-595d-4753-8cf6-0a9f2e1e438b\" (UID: \"3c6b5992-595d-4753-8cf6-0a9f2e1e438b\") " Jan 29 09:15:40 crc kubenswrapper[4895]: I0129 09:15:40.685627 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c6b5992-595d-4753-8cf6-0a9f2e1e438b-host" (OuterVolumeSpecName: "host") pod "3c6b5992-595d-4753-8cf6-0a9f2e1e438b" (UID: "3c6b5992-595d-4753-8cf6-0a9f2e1e438b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:15:40 crc kubenswrapper[4895]: I0129 09:15:40.685979 4895 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3c6b5992-595d-4753-8cf6-0a9f2e1e438b-host\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:40 crc kubenswrapper[4895]: I0129 09:15:40.691593 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6b5992-595d-4753-8cf6-0a9f2e1e438b-kube-api-access-th96x" (OuterVolumeSpecName: "kube-api-access-th96x") pod "3c6b5992-595d-4753-8cf6-0a9f2e1e438b" (UID: "3c6b5992-595d-4753-8cf6-0a9f2e1e438b"). InnerVolumeSpecName "kube-api-access-th96x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:15:40 crc kubenswrapper[4895]: I0129 09:15:40.788711 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th96x\" (UniqueName: \"kubernetes.io/projected/3c6b5992-595d-4753-8cf6-0a9f2e1e438b-kube-api-access-th96x\") on node \"crc\" DevicePath \"\"" Jan 29 09:15:41 crc kubenswrapper[4895]: I0129 09:15:41.225514 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c6b5992-595d-4753-8cf6-0a9f2e1e438b" path="/var/lib/kubelet/pods/3c6b5992-595d-4753-8cf6-0a9f2e1e438b/volumes" Jan 29 09:15:41 crc kubenswrapper[4895]: I0129 09:15:41.460636 4895 scope.go:117] "RemoveContainer" containerID="d81f89e8e216c89f634b3528db24a573fd5f6dc96c85ac415eb7c12aeab073b7" Jan 29 09:15:41 crc kubenswrapper[4895]: I0129 09:15:41.460720 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5dzdv/crc-debug-tn7wg" Jan 29 09:15:46 crc kubenswrapper[4895]: I0129 09:15:46.021147 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:15:46 crc kubenswrapper[4895]: I0129 09:15:46.022103 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:15:57 crc kubenswrapper[4895]: I0129 09:15:57.918454 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5799c46566-89j6v_dcb59826-4f95-4127-b7fe-f32cd95cad8e/barbican-api/0.log" Jan 29 09:15:58 crc kubenswrapper[4895]: I0129 09:15:58.133383 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5799c46566-89j6v_dcb59826-4f95-4127-b7fe-f32cd95cad8e/barbican-api-log/0.log" Jan 29 09:15:58 crc kubenswrapper[4895]: I0129 09:15:58.166708 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-bb49b7794-577rp_c1d9162f-7759-46d6-bea9-a9975470a1d9/barbican-keystone-listener/0.log" Jan 29 09:15:58 crc kubenswrapper[4895]: I0129 09:15:58.381205 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-555f79d94f-q55hl_c403270a-6868-4dec-8340-ac3237f9028e/barbican-worker/0.log" Jan 29 09:15:58 crc kubenswrapper[4895]: I0129 09:15:58.403222 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-bb49b7794-577rp_c1d9162f-7759-46d6-bea9-a9975470a1d9/barbican-keystone-listener-log/0.log" Jan 29 09:15:58 crc kubenswrapper[4895]: I0129 09:15:58.482798 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-555f79d94f-q55hl_c403270a-6868-4dec-8340-ac3237f9028e/barbican-worker-log/0.log" Jan 29 09:15:58 crc kubenswrapper[4895]: I0129 09:15:58.699084 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6dced459-73d7-4079-8450-1d22972197c0/ceilometer-central-agent/0.log" Jan 29 09:15:58 crc kubenswrapper[4895]: I0129 09:15:58.776733 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6dced459-73d7-4079-8450-1d22972197c0/proxy-httpd/0.log" Jan 29 09:15:58 crc kubenswrapper[4895]: I0129 09:15:58.840721 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6dced459-73d7-4079-8450-1d22972197c0/ceilometer-notification-agent/0.log" Jan 29 09:15:58 crc kubenswrapper[4895]: I0129 09:15:58.941857 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6dced459-73d7-4079-8450-1d22972197c0/sg-core/0.log" Jan 29 09:15:59 crc kubenswrapper[4895]: I0129 09:15:59.071219 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2e4960bc-f10d-48c0-835d-9616ae852ec8/cinder-api/0.log" Jan 29 09:15:59 crc kubenswrapper[4895]: I0129 09:15:59.101183 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2e4960bc-f10d-48c0-835d-9616ae852ec8/cinder-api-log/0.log" Jan 29 09:15:59 crc kubenswrapper[4895]: I0129 09:15:59.348066 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19/cinder-scheduler/0.log" Jan 29 09:15:59 crc kubenswrapper[4895]: I0129 09:15:59.396524 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19/probe/0.log" Jan 29 09:15:59 crc kubenswrapper[4895]: I0129 09:15:59.581962 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-82zv8_d9878c3e-4959-4f63-bfc3-899f9a55eee2/init/0.log" Jan 29 09:15:59 crc kubenswrapper[4895]: I0129 09:15:59.764743 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-82zv8_d9878c3e-4959-4f63-bfc3-899f9a55eee2/init/0.log" Jan 29 09:15:59 crc kubenswrapper[4895]: I0129 09:15:59.774820 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-82zv8_d9878c3e-4959-4f63-bfc3-899f9a55eee2/dnsmasq-dns/0.log" Jan 29 09:15:59 crc kubenswrapper[4895]: I0129 09:15:59.885012 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cd318cba-9380-4676-bb83-3256c9c5adf5/glance-httpd/0.log" Jan 29 09:16:00 crc kubenswrapper[4895]: I0129 09:16:00.006455 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cd318cba-9380-4676-bb83-3256c9c5adf5/glance-log/0.log" Jan 29 09:16:00 crc kubenswrapper[4895]: I0129 09:16:00.084749 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_dbcc1d5c-0822-492b-98ce-667e0f13d497/glance-httpd/0.log" Jan 29 09:16:00 crc kubenswrapper[4895]: I0129 09:16:00.158858 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_dbcc1d5c-0822-492b-98ce-667e0f13d497/glance-log/0.log" Jan 29 09:16:00 crc kubenswrapper[4895]: I0129 09:16:00.338396 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-7f7db74854-hkzkt_5105e55b-cea6-4b20-bf0a-f7f0410f8aa9/init/0.log" Jan 29 09:16:00 crc kubenswrapper[4895]: I0129 09:16:00.572537 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-7f7db74854-hkzkt_5105e55b-cea6-4b20-bf0a-f7f0410f8aa9/init/0.log" Jan 29 09:16:00 crc kubenswrapper[4895]: I0129 09:16:00.573250 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-7f7db74854-hkzkt_5105e55b-cea6-4b20-bf0a-f7f0410f8aa9/ironic-api-log/0.log" Jan 29 09:16:00 crc kubenswrapper[4895]: I0129 09:16:00.664716 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-7f7db74854-hkzkt_5105e55b-cea6-4b20-bf0a-f7f0410f8aa9/ironic-api/0.log" Jan 29 09:16:00 crc kubenswrapper[4895]: I0129 09:16:00.823170 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/init/0.log" Jan 29 09:16:01 crc kubenswrapper[4895]: I0129 09:16:01.089568 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/ironic-python-agent-init/0.log" Jan 29 09:16:01 crc kubenswrapper[4895]: I0129 09:16:01.099393 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/init/0.log" Jan 29 09:16:01 crc kubenswrapper[4895]: I0129 09:16:01.103654 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/ironic-python-agent-init/0.log" Jan 29 09:16:01 crc kubenswrapper[4895]: I0129 09:16:01.428547 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/init/0.log" Jan 29 09:16:01 crc kubenswrapper[4895]: I0129 09:16:01.487675 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/ironic-python-agent-init/0.log" Jan 29 09:16:01 crc kubenswrapper[4895]: I0129 09:16:01.966797 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/init/0.log" Jan 29 09:16:02 crc kubenswrapper[4895]: I0129 09:16:02.157899 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/pxe-init/0.log" Jan 29 09:16:02 crc kubenswrapper[4895]: I0129 09:16:02.211157 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/ironic-python-agent-init/0.log" Jan 29 09:16:02 crc kubenswrapper[4895]: I0129 09:16:02.487365 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/httpboot/0.log" Jan 29 09:16:02 crc kubenswrapper[4895]: I0129 09:16:02.718444 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/ramdisk-logs/0.log" Jan 29 09:16:02 crc kubenswrapper[4895]: I0129 09:16:02.823494 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/ironic-conductor/0.log" Jan 29 09:16:02 crc kubenswrapper[4895]: I0129 09:16:02.873493 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/pxe-init/0.log" Jan 29 09:16:03 crc kubenswrapper[4895]: I0129 09:16:03.001492 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/pxe-init/0.log" Jan 29 09:16:03 crc kubenswrapper[4895]: I0129 09:16:03.044315 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-4mm74_03042a97-0311-4d0c-9878-380987ec9407/init/0.log" Jan 29 09:16:03 crc kubenswrapper[4895]: I0129 09:16:03.370180 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-db-sync-4mm74_03042a97-0311-4d0c-9878-380987ec9407/init/0.log" Jan 29 09:16:03 crc kubenswrapper[4895]: I0129 09:16:03.392323 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-4mm74_03042a97-0311-4d0c-9878-380987ec9407/ironic-db-sync/0.log" Jan 29 09:16:03 crc kubenswrapper[4895]: I0129 09:16:03.406896 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/ironic-python-agent-init/0.log" Jan 29 09:16:03 crc kubenswrapper[4895]: I0129 09:16:03.466076 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/pxe-init/0.log" Jan 29 09:16:03 crc kubenswrapper[4895]: I0129 09:16:03.600233 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/ironic-python-agent-init/0.log" Jan 29 09:16:03 crc kubenswrapper[4895]: I0129 09:16:03.634294 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/inspector-pxe-init/0.log" Jan 29 09:16:03 crc kubenswrapper[4895]: I0129 09:16:03.658898 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/inspector-pxe-init/0.log" Jan 29 09:16:03 crc kubenswrapper[4895]: I0129 09:16:03.890961 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/ironic-python-agent-init/0.log" Jan 29 09:16:03 crc kubenswrapper[4895]: I0129 09:16:03.917537 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/inspector-httpboot/0.log" Jan 29 09:16:03 crc kubenswrapper[4895]: I0129 09:16:03.923509 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/ironic-inspector/2.log" Jan 29 09:16:03 crc kubenswrapper[4895]: I0129 09:16:03.937658 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/ironic-inspector/1.log" Jan 29 09:16:03 crc kubenswrapper[4895]: I0129 09:16:03.945835 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/inspector-pxe-init/0.log" Jan 29 09:16:04 crc kubenswrapper[4895]: I0129 09:16:04.142045 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/ironic-inspector-httpd/0.log" Jan 29 09:16:04 crc kubenswrapper[4895]: I0129 09:16:04.159607 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-db-sync-v7z9b_1318c5c6-26bf-46e6-aba5-ab4e024be588/ironic-inspector-db-sync/0.log" Jan 29 09:16:04 crc kubenswrapper[4895]: I0129 09:16:04.165459 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/ramdisk-logs/0.log" Jan 29 09:16:04 crc kubenswrapper[4895]: I0129 09:16:04.391400 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-78c59f886f-xtrfg_844ab9b8-4b72-401d-b008-db11605452a8/ironic-neutron-agent/1.log" Jan 29 09:16:04 crc kubenswrapper[4895]: I0129 09:16:04.422908 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-78c59f886f-xtrfg_844ab9b8-4b72-401d-b008-db11605452a8/ironic-neutron-agent/2.log" Jan 29 09:16:04 crc kubenswrapper[4895]: I0129 09:16:04.726804 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5f75d78756-glzhf_406e8af5-68c1-48c3-b377-68d3f60c10a9/keystone-api/0.log" Jan 29 09:16:04 crc kubenswrapper[4895]: I0129 09:16:04.830118 4895 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_75059205-4797-4975-98d8-bcbf919748ba/kube-state-metrics/0.log" Jan 29 09:16:05 crc kubenswrapper[4895]: I0129 09:16:05.155565 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6b548b4f8c-kc92t_657c2688-8379-4121-a64a-89c1fd428b57/neutron-api/0.log" Jan 29 09:16:05 crc kubenswrapper[4895]: I0129 09:16:05.167321 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6b548b4f8c-kc92t_657c2688-8379-4121-a64a-89c1fd428b57/neutron-httpd/0.log" Jan 29 09:16:05 crc kubenswrapper[4895]: I0129 09:16:05.567422 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_558cbc7f-9455-49b5-89aa-b898d468ca08/nova-api-api/0.log" Jan 29 09:16:05 crc kubenswrapper[4895]: I0129 09:16:05.623399 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_633d9018-c7c7-420f-9b03-6c983a5c40b4/nova-cell0-conductor-conductor/0.log" Jan 29 09:16:05 crc kubenswrapper[4895]: I0129 09:16:05.631077 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_558cbc7f-9455-49b5-89aa-b898d468ca08/nova-api-log/0.log" Jan 29 09:16:06 crc kubenswrapper[4895]: I0129 09:16:06.030739 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4653a20c-bef0-463a-962d-f1f17b2011e3/nova-cell1-conductor-conductor/0.log" Jan 29 09:16:06 crc kubenswrapper[4895]: I0129 09:16:06.032072 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_212553fe-f689-4d32-9368-e1f5a6a9654d/nova-cell1-novncproxy-novncproxy/0.log" Jan 29 09:16:06 crc kubenswrapper[4895]: I0129 09:16:06.286907 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a2e13290-5cda-49ac-9efd-5e8a72da76b6/nova-metadata-log/0.log" Jan 29 09:16:06 crc kubenswrapper[4895]: I0129 09:16:06.568282 4895 scope.go:117] 
"RemoveContainer" containerID="e248ecf7514232be0d7ff76c58cbad6dc9f1c2bf4367a5acb67c7c40f4d465d4" Jan 29 09:16:06 crc kubenswrapper[4895]: I0129 09:16:06.590817 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_216aa652-e284-4fb8-90bf-d975cc19d1f0/nova-scheduler-scheduler/0.log" Jan 29 09:16:06 crc kubenswrapper[4895]: I0129 09:16:06.638659 4895 scope.go:117] "RemoveContainer" containerID="5a64b142a76b6c87a3c1406fae2a0cb677f914d8bd286e32ff3923312888e44c" Jan 29 09:16:06 crc kubenswrapper[4895]: I0129 09:16:06.715813 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a2e13290-5cda-49ac-9efd-5e8a72da76b6/nova-metadata-metadata/0.log" Jan 29 09:16:06 crc kubenswrapper[4895]: I0129 09:16:06.777593 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_205e527c-d0a7-4b85-9542-19a871c61693/mysql-bootstrap/0.log" Jan 29 09:16:06 crc kubenswrapper[4895]: I0129 09:16:06.945954 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_205e527c-d0a7-4b85-9542-19a871c61693/mysql-bootstrap/0.log" Jan 29 09:16:07 crc kubenswrapper[4895]: I0129 09:16:07.013502 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_205e527c-d0a7-4b85-9542-19a871c61693/galera/0.log" Jan 29 09:16:07 crc kubenswrapper[4895]: I0129 09:16:07.077616 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94/mysql-bootstrap/0.log" Jan 29 09:16:07 crc kubenswrapper[4895]: I0129 09:16:07.305524 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94/mysql-bootstrap/0.log" Jan 29 09:16:07 crc kubenswrapper[4895]: I0129 09:16:07.386938 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94/galera/0.log" Jan 29 09:16:07 crc kubenswrapper[4895]: I0129 09:16:07.426791 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f10bf685-c7de-4126-afc5-6bd68c3e8845/openstackclient/0.log" Jan 29 09:16:07 crc kubenswrapper[4895]: I0129 09:16:07.607010 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lc26n_50cc7d34-44f8-490c-a18c-2d747721d20a/openstack-network-exporter/0.log" Jan 29 09:16:07 crc kubenswrapper[4895]: I0129 09:16:07.733255 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mjz6w_5f71eedb-46ac-474f-9d1e-d4909a49e05b/ovn-controller/0.log" Jan 29 09:16:07 crc kubenswrapper[4895]: I0129 09:16:07.869031 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rzm2l_b283d44c-d996-450c-9b6c-dea58fe633a7/ovsdb-server-init/0.log" Jan 29 09:16:08 crc kubenswrapper[4895]: I0129 09:16:08.186834 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rzm2l_b283d44c-d996-450c-9b6c-dea58fe633a7/ovsdb-server-init/0.log" Jan 29 09:16:08 crc kubenswrapper[4895]: I0129 09:16:08.187681 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rzm2l_b283d44c-d996-450c-9b6c-dea58fe633a7/ovsdb-server/0.log" Jan 29 09:16:08 crc kubenswrapper[4895]: I0129 09:16:08.190667 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rzm2l_b283d44c-d996-450c-9b6c-dea58fe633a7/ovs-vswitchd/0.log" Jan 29 09:16:08 crc kubenswrapper[4895]: I0129 09:16:08.411451 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d524d5b9-7173-4f57-92f5-bf50a940538b/ovn-northd/0.log" Jan 29 09:16:08 crc kubenswrapper[4895]: I0129 09:16:08.416279 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_d524d5b9-7173-4f57-92f5-bf50a940538b/openstack-network-exporter/0.log" Jan 29 09:16:08 crc kubenswrapper[4895]: I0129 09:16:08.554930 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_877924c3-f4b2-4040-8b6c-bbc80d6d58af/openstack-network-exporter/0.log" Jan 29 09:16:08 crc kubenswrapper[4895]: I0129 09:16:08.686375 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_877924c3-f4b2-4040-8b6c-bbc80d6d58af/ovsdbserver-nb/0.log" Jan 29 09:16:08 crc kubenswrapper[4895]: I0129 09:16:08.815115 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_250930c1-98a4-4b5d-a0d7-0ba3063bc098/ovsdbserver-sb/0.log" Jan 29 09:16:08 crc kubenswrapper[4895]: I0129 09:16:08.826678 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_250930c1-98a4-4b5d-a0d7-0ba3063bc098/openstack-network-exporter/0.log" Jan 29 09:16:09 crc kubenswrapper[4895]: I0129 09:16:09.096254 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-796fb887fb-dd2s5_2e7b9632-7a45-48f5-8887-4c79543170fd/placement-api/0.log" Jan 29 09:16:09 crc kubenswrapper[4895]: I0129 09:16:09.106641 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-796fb887fb-dd2s5_2e7b9632-7a45-48f5-8887-4c79543170fd/placement-log/0.log" Jan 29 09:16:09 crc kubenswrapper[4895]: I0129 09:16:09.376778 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b1ce25b0-0fc4-4560-88ba-ee5261d106e9/setup-container/0.log" Jan 29 09:16:09 crc kubenswrapper[4895]: I0129 09:16:09.593874 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b1ce25b0-0fc4-4560-88ba-ee5261d106e9/rabbitmq/0.log" Jan 29 09:16:09 crc kubenswrapper[4895]: I0129 09:16:09.610848 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b1ce25b0-0fc4-4560-88ba-ee5261d106e9/setup-container/0.log" Jan 29 09:16:09 crc kubenswrapper[4895]: I0129 09:16:09.670390 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_fb202ed2-1680-4411-83d3-4dcfdc317ac9/setup-container/0.log" Jan 29 09:16:09 crc kubenswrapper[4895]: I0129 09:16:09.952030 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_fb202ed2-1680-4411-83d3-4dcfdc317ac9/setup-container/0.log" Jan 29 09:16:09 crc kubenswrapper[4895]: I0129 09:16:09.980412 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_fb202ed2-1680-4411-83d3-4dcfdc317ac9/rabbitmq/0.log" Jan 29 09:16:10 crc kubenswrapper[4895]: I0129 09:16:10.024064 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5fb7b47b77-cq2p9_cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687/proxy-httpd/0.log" Jan 29 09:16:10 crc kubenswrapper[4895]: I0129 09:16:10.277154 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5fb7b47b77-cq2p9_cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687/proxy-server/0.log" Jan 29 09:16:10 crc kubenswrapper[4895]: I0129 09:16:10.284997 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-drdk8_073f4b22-319f-4cbb-ac96-c0a18da477a6/swift-ring-rebalance/0.log" Jan 29 09:16:10 crc kubenswrapper[4895]: I0129 09:16:10.819236 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/account-server/0.log" Jan 29 09:16:10 crc kubenswrapper[4895]: I0129 09:16:10.823847 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/account-replicator/0.log" Jan 29 09:16:10 crc kubenswrapper[4895]: I0129 09:16:10.860104 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/account-auditor/0.log" Jan 29 09:16:10 crc kubenswrapper[4895]: I0129 09:16:10.881630 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/account-reaper/0.log" Jan 29 09:16:11 crc kubenswrapper[4895]: I0129 09:16:11.035571 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/container-auditor/0.log" Jan 29 09:16:11 crc kubenswrapper[4895]: I0129 09:16:11.103637 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/container-replicator/0.log" Jan 29 09:16:11 crc kubenswrapper[4895]: I0129 09:16:11.135847 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/container-server/0.log" Jan 29 09:16:11 crc kubenswrapper[4895]: I0129 09:16:11.208791 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/container-updater/0.log" Jan 29 09:16:11 crc kubenswrapper[4895]: I0129 09:16:11.338013 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/object-expirer/0.log" Jan 29 09:16:11 crc kubenswrapper[4895]: I0129 09:16:11.352503 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/object-auditor/0.log" Jan 29 09:16:11 crc kubenswrapper[4895]: I0129 09:16:11.429809 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/object-replicator/0.log" Jan 29 09:16:11 crc kubenswrapper[4895]: I0129 09:16:11.469987 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/object-server/0.log" Jan 29 09:16:11 crc kubenswrapper[4895]: I0129 09:16:11.588849 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/object-updater/0.log" Jan 29 09:16:11 crc kubenswrapper[4895]: I0129 09:16:11.645109 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/rsync/0.log" Jan 29 09:16:11 crc kubenswrapper[4895]: I0129 09:16:11.742143 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/swift-recon-cron/0.log" Jan 29 09:16:14 crc kubenswrapper[4895]: I0129 09:16:14.113339 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d720a04a-6de4-4dd9-b918-471d3d69de73/memcached/0.log" Jan 29 09:16:16 crc kubenswrapper[4895]: I0129 09:16:16.020487 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:16:16 crc kubenswrapper[4895]: I0129 09:16:16.020994 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:16:16 crc kubenswrapper[4895]: I0129 09:16:16.021061 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 09:16:16 crc kubenswrapper[4895]: I0129 09:16:16.022098 4895 kuberuntime_manager.go:1027] "Message 
for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ebd8df48bf0fb45ec83588a7131b73c01eddb922c6408acfd935865f5db90c6"} pod="openshift-machine-config-operator/machine-config-daemon-z82hk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:16:16 crc kubenswrapper[4895]: I0129 09:16:16.022161 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" containerID="cri-o://4ebd8df48bf0fb45ec83588a7131b73c01eddb922c6408acfd935865f5db90c6" gracePeriod=600 Jan 29 09:16:16 crc kubenswrapper[4895]: I0129 09:16:16.932415 4895 generic.go:334] "Generic (PLEG): container finished" podID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerID="4ebd8df48bf0fb45ec83588a7131b73c01eddb922c6408acfd935865f5db90c6" exitCode=0 Jan 29 09:16:16 crc kubenswrapper[4895]: I0129 09:16:16.933114 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerDied","Data":"4ebd8df48bf0fb45ec83588a7131b73c01eddb922c6408acfd935865f5db90c6"} Jan 29 09:16:16 crc kubenswrapper[4895]: I0129 09:16:16.933157 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerStarted","Data":"79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc"} Jan 29 09:16:16 crc kubenswrapper[4895]: I0129 09:16:16.933176 4895 scope.go:117] "RemoveContainer" containerID="cbb9d128ed7bcb6f733cedc1767f3c7adfb06ac45b08c28e2c83f7d9afeeb9b6" Jan 29 09:16:40 crc kubenswrapper[4895]: I0129 09:16:40.207926 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-gd75d_bc16fc79-c074-4969-af29-c46fdd06f9f8/manager/0.log" Jan 29 09:16:40 crc kubenswrapper[4895]: I0129 09:16:40.381374 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8_f1518b1d-569a-475c-ac03-5ccf624c3a36/util/0.log" Jan 29 09:16:40 crc kubenswrapper[4895]: I0129 09:16:40.661312 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8_f1518b1d-569a-475c-ac03-5ccf624c3a36/util/0.log" Jan 29 09:16:40 crc kubenswrapper[4895]: I0129 09:16:40.678599 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8_f1518b1d-569a-475c-ac03-5ccf624c3a36/pull/0.log" Jan 29 09:16:40 crc kubenswrapper[4895]: I0129 09:16:40.697474 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8_f1518b1d-569a-475c-ac03-5ccf624c3a36/pull/0.log" Jan 29 09:16:40 crc kubenswrapper[4895]: I0129 09:16:40.934220 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8_f1518b1d-569a-475c-ac03-5ccf624c3a36/extract/0.log" Jan 29 09:16:40 crc kubenswrapper[4895]: I0129 09:16:40.937372 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8_f1518b1d-569a-475c-ac03-5ccf624c3a36/util/0.log" Jan 29 09:16:41 crc kubenswrapper[4895]: I0129 09:16:41.024200 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8_f1518b1d-569a-475c-ac03-5ccf624c3a36/pull/0.log" Jan 29 09:16:41 crc 
kubenswrapper[4895]: I0129 09:16:41.264276 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-6cz2h_d4d2a9b0-6258-4257-9824-74abbbc40b24/manager/0.log" Jan 29 09:16:41 crc kubenswrapper[4895]: I0129 09:16:41.327185 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-58zzj_b2dd46da-1ebf-489f-8467-eab7fc206736/manager/0.log" Jan 29 09:16:41 crc kubenswrapper[4895]: I0129 09:16:41.734537 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-7hp5l_e97a1d25-e9ba-4ce2-b172-035afb18721b/manager/0.log" Jan 29 09:16:42 crc kubenswrapper[4895]: I0129 09:16:42.021364 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-sdkzk_5e73fff0-3497-4937-bfe0-10bea87ddeb3/manager/0.log" Jan 29 09:16:42 crc kubenswrapper[4895]: I0129 09:16:42.124100 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-9dpss_ad4a2a80-d64f-45b0-bea8-dc2e5c7ea050/manager/0.log" Jan 29 09:16:42 crc kubenswrapper[4895]: I0129 09:16:42.519699 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54c4948594-l45qb_baa89b4d-cf32-498b-a624-585afea7f964/manager/0.log" Jan 29 09:16:42 crc kubenswrapper[4895]: I0129 09:16:42.529502 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-tptkw_cbca22f6-6189-4f59-b9bd-832466c437d1/manager/0.log" Jan 29 09:16:42 crc kubenswrapper[4895]: I0129 09:16:42.658777 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-8t4nd_348e067e-1b54-43e2-9c01-bf430f7a3630/manager/0.log" Jan 29 
09:16:42 crc kubenswrapper[4895]: I0129 09:16:42.791314 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-pq8r4_358815d3-7542-429d-bfa0-742e75ada2f6/manager/0.log" Jan 29 09:16:43 crc kubenswrapper[4895]: I0129 09:16:43.035070 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-dg5kf_bb23ce65-61d9-4868-8008-7582ded2bff2/manager/0.log" Jan 29 09:16:43 crc kubenswrapper[4895]: I0129 09:16:43.141903 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-zbdxv_c57b39e7-275d-4ef2-af51-3e0b014182ee/manager/0.log" Jan 29 09:16:43 crc kubenswrapper[4895]: I0129 09:16:43.391127 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-zpdkh_f7276bca-f319-46bf-a1b4-92a6aec8e6e6/manager/0.log" Jan 29 09:16:43 crc kubenswrapper[4895]: I0129 09:16:43.552052 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-qz9c2_001b758d-81ef-40e5-b53a-7c264915580d/manager/0.log" Jan 29 09:16:43 crc kubenswrapper[4895]: I0129 09:16:43.786909 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d_d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8/manager/0.log" Jan 29 09:16:44 crc kubenswrapper[4895]: I0129 09:16:44.013150 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-777976898d-2mx8n_5567d75e-d4d1-4f59-a79b-b185eaadd750/operator/0.log" Jan 29 09:16:44 crc kubenswrapper[4895]: I0129 09:16:44.232826 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-56sqg_a833ad23-634a-4270-a6aa-267480e7bb2a/registry-server/0.log" Jan 29 09:16:44 crc kubenswrapper[4895]: I0129 09:16:44.484290 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-mj7xz_6bf40523-2804-408c-b50d-cb04bf5b32fc/manager/0.log" Jan 29 09:16:44 crc kubenswrapper[4895]: I0129 09:16:44.715783 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-569b5dc57f-cn6fr_22d12b29-fd4e-4aa2-9081-a79a3a539dab/manager/0.log" Jan 29 09:16:44 crc kubenswrapper[4895]: I0129 09:16:44.759262 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-mnp2h_bf9282d5-a557-4321-b05d-35552e124429/manager/0.log" Jan 29 09:16:44 crc kubenswrapper[4895]: I0129 09:16:44.827780 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fh6n2_1c9af700-ef2b-4d02-a76f-77d31d981a5f/operator/0.log" Jan 29 09:16:45 crc kubenswrapper[4895]: I0129 09:16:45.081187 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-pntdq_c268affd-83d0-4313-a5ba-ee20846ad416/manager/0.log" Jan 29 09:16:45 crc kubenswrapper[4895]: I0129 09:16:45.159071 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-fczp5_853077df-3183-4811-8554-5940dc41912e/manager/0.log" Jan 29 09:16:45 crc kubenswrapper[4895]: I0129 09:16:45.379683 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-4zrlz_7520cf55-cb4a-4598-80d9-499ab60f5ff1/manager/0.log" Jan 29 09:16:45 crc kubenswrapper[4895]: I0129 09:16:45.403562 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-gxq7x_7573c3c1-4b9d-4175-beef-8a4d0c604b6a/manager/0.log" Jan 29 09:17:10 crc kubenswrapper[4895]: I0129 09:17:10.997953 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-f5cn6_13c23359-7d69-4f3c-b89a-a25bee602474/control-plane-machine-set-operator/0.log" Jan 29 09:17:11 crc kubenswrapper[4895]: I0129 09:17:11.290865 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xc6q5_5203d54b-a735-4118-bae0-7554299a98cf/kube-rbac-proxy/0.log" Jan 29 09:17:11 crc kubenswrapper[4895]: I0129 09:17:11.310795 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xc6q5_5203d54b-a735-4118-bae0-7554299a98cf/machine-api-operator/0.log" Jan 29 09:17:27 crc kubenswrapper[4895]: I0129 09:17:27.984453 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-lmpk5_6b07c0c4-eb39-4313-b842-9a36bd400bae/cert-manager-controller/0.log" Jan 29 09:17:28 crc kubenswrapper[4895]: I0129 09:17:28.133312 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-rwbcg_754acefa-2366-4c3a-97be-e4a941d8066b/cert-manager-cainjector/0.log" Jan 29 09:17:28 crc kubenswrapper[4895]: I0129 09:17:28.231927 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-ttlgx_0e41817c-460a-4a92-9220-10fde5db690b/cert-manager-webhook/0.log" Jan 29 09:17:43 crc kubenswrapper[4895]: I0129 09:17:43.946451 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-lt6cs_e5b25585-8953-42bb-a128-13272bda1f87/nmstate-console-plugin/0.log" Jan 29 09:17:44 crc kubenswrapper[4895]: I0129 09:17:44.196126 4895 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-62g2t_2a149626-5a36-418c-b7a2-87ff50e92c34/nmstate-handler/0.log" Jan 29 09:17:44 crc kubenswrapper[4895]: I0129 09:17:44.271279 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-qg2h4_2ce6529a-8832-46df-b211-7d7f2388214b/kube-rbac-proxy/0.log" Jan 29 09:17:44 crc kubenswrapper[4895]: I0129 09:17:44.371956 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-qg2h4_2ce6529a-8832-46df-b211-7d7f2388214b/nmstate-metrics/0.log" Jan 29 09:17:44 crc kubenswrapper[4895]: I0129 09:17:44.406620 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-nbgdz_686e1923-3a25-460b-b2f1-636cd6039ffe/nmstate-operator/0.log" Jan 29 09:17:44 crc kubenswrapper[4895]: I0129 09:17:44.981906 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-mgwfl_df364a5d-82b0-43f6-9e56-fb2fd0fef1e2/nmstate-webhook/0.log" Jan 29 09:18:13 crc kubenswrapper[4895]: I0129 09:18:13.938105 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-68xht_3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f/kube-rbac-proxy/0.log" Jan 29 09:18:14 crc kubenswrapper[4895]: I0129 09:18:14.157945 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-68xht_3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f/controller/0.log" Jan 29 09:18:14 crc kubenswrapper[4895]: I0129 09:18:14.291767 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-frr-files/0.log" Jan 29 09:18:14 crc kubenswrapper[4895]: I0129 09:18:14.478130 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-metrics/0.log" Jan 29 09:18:14 crc 
kubenswrapper[4895]: I0129 09:18:14.517862 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-frr-files/0.log" Jan 29 09:18:14 crc kubenswrapper[4895]: I0129 09:18:14.549888 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-reloader/0.log" Jan 29 09:18:14 crc kubenswrapper[4895]: I0129 09:18:14.554002 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-reloader/0.log" Jan 29 09:18:14 crc kubenswrapper[4895]: I0129 09:18:14.737207 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-frr-files/0.log" Jan 29 09:18:14 crc kubenswrapper[4895]: I0129 09:18:14.768163 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-reloader/0.log" Jan 29 09:18:14 crc kubenswrapper[4895]: I0129 09:18:14.787811 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-metrics/0.log" Jan 29 09:18:14 crc kubenswrapper[4895]: I0129 09:18:14.799842 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-metrics/0.log" Jan 29 09:18:15 crc kubenswrapper[4895]: I0129 09:18:15.020479 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-frr-files/0.log" Jan 29 09:18:15 crc kubenswrapper[4895]: I0129 09:18:15.053976 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-metrics/0.log" Jan 29 09:18:15 crc kubenswrapper[4895]: I0129 09:18:15.072476 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/controller/0.log" Jan 29 09:18:15 crc kubenswrapper[4895]: I0129 09:18:15.090679 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-reloader/0.log" Jan 29 09:18:15 crc kubenswrapper[4895]: I0129 09:18:15.259234 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/frr-metrics/0.log" Jan 29 09:18:15 crc kubenswrapper[4895]: I0129 09:18:15.368690 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/kube-rbac-proxy/0.log" Jan 29 09:18:15 crc kubenswrapper[4895]: I0129 09:18:15.390621 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/kube-rbac-proxy-frr/0.log" Jan 29 09:18:15 crc kubenswrapper[4895]: I0129 09:18:15.530026 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/reloader/0.log" Jan 29 09:18:15 crc kubenswrapper[4895]: I0129 09:18:15.687531 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-jm6jg_5d4d4832-512a-4d5c-b6ea-8a90b2ad3297/frr-k8s-webhook-server/0.log" Jan 29 09:18:15 crc kubenswrapper[4895]: I0129 09:18:15.935212 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6695fc676d-4fxsl_d82c9dec-3917-4cb6-91f0-ee9b6ab253e7/manager/0.log" Jan 29 09:18:16 crc kubenswrapper[4895]: I0129 09:18:16.020164 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 29 09:18:16 crc kubenswrapper[4895]: I0129 09:18:16.020311 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:18:16 crc kubenswrapper[4895]: I0129 09:18:16.108014 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-659bffd789-lt6hz_5af74c68-7b32-4db6-97b7-35cdcd2e9504/webhook-server/0.log" Jan 29 09:18:16 crc kubenswrapper[4895]: I0129 09:18:16.264432 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vpgqh_622e6489-4886-4658-b155-3c0d9cf63fbb/kube-rbac-proxy/0.log" Jan 29 09:18:16 crc kubenswrapper[4895]: I0129 09:18:16.336504 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/frr/0.log" Jan 29 09:18:16 crc kubenswrapper[4895]: I0129 09:18:16.709795 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vpgqh_622e6489-4886-4658-b155-3c0d9cf63fbb/speaker/0.log" Jan 29 09:18:33 crc kubenswrapper[4895]: I0129 09:18:33.513210 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh_0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709/util/0.log" Jan 29 09:18:33 crc kubenswrapper[4895]: I0129 09:18:33.651610 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh_0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709/util/0.log" Jan 29 09:18:33 crc kubenswrapper[4895]: I0129 09:18:33.716160 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh_0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709/pull/0.log" Jan 29 09:18:33 crc kubenswrapper[4895]: I0129 09:18:33.782254 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh_0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709/pull/0.log" Jan 29 09:18:34 crc kubenswrapper[4895]: I0129 09:18:34.005762 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh_0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709/extract/0.log" Jan 29 09:18:34 crc kubenswrapper[4895]: I0129 09:18:34.039219 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh_0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709/pull/0.log" Jan 29 09:18:34 crc kubenswrapper[4895]: I0129 09:18:34.047553 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh_0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709/util/0.log" Jan 29 09:18:34 crc kubenswrapper[4895]: I0129 09:18:34.224762 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt_31f401a0-5ab9-427e-a086-8099fe28462f/util/0.log" Jan 29 09:18:34 crc kubenswrapper[4895]: I0129 09:18:34.430209 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt_31f401a0-5ab9-427e-a086-8099fe28462f/util/0.log" Jan 29 09:18:34 crc kubenswrapper[4895]: I0129 09:18:34.452866 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt_31f401a0-5ab9-427e-a086-8099fe28462f/pull/0.log" Jan 29 
09:18:34 crc kubenswrapper[4895]: I0129 09:18:34.527436 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt_31f401a0-5ab9-427e-a086-8099fe28462f/pull/0.log" Jan 29 09:18:34 crc kubenswrapper[4895]: I0129 09:18:34.991663 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt_31f401a0-5ab9-427e-a086-8099fe28462f/util/0.log" Jan 29 09:18:35 crc kubenswrapper[4895]: I0129 09:18:35.020671 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt_31f401a0-5ab9-427e-a086-8099fe28462f/extract/0.log" Jan 29 09:18:35 crc kubenswrapper[4895]: I0129 09:18:35.022934 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt_31f401a0-5ab9-427e-a086-8099fe28462f/pull/0.log" Jan 29 09:18:35 crc kubenswrapper[4895]: I0129 09:18:35.213538 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsrdx_8276d7a1-1274-4d85-9243-ae6b7984ef52/extract-utilities/0.log" Jan 29 09:18:35 crc kubenswrapper[4895]: I0129 09:18:35.528762 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsrdx_8276d7a1-1274-4d85-9243-ae6b7984ef52/extract-content/0.log" Jan 29 09:18:35 crc kubenswrapper[4895]: I0129 09:18:35.570777 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsrdx_8276d7a1-1274-4d85-9243-ae6b7984ef52/extract-utilities/0.log" Jan 29 09:18:35 crc kubenswrapper[4895]: I0129 09:18:35.581393 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsrdx_8276d7a1-1274-4d85-9243-ae6b7984ef52/extract-content/0.log" Jan 29 
09:18:36 crc kubenswrapper[4895]: I0129 09:18:36.274083 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsrdx_8276d7a1-1274-4d85-9243-ae6b7984ef52/extract-utilities/0.log" Jan 29 09:18:36 crc kubenswrapper[4895]: I0129 09:18:36.404719 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsrdx_8276d7a1-1274-4d85-9243-ae6b7984ef52/extract-content/0.log" Jan 29 09:18:36 crc kubenswrapper[4895]: I0129 09:18:36.735332 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4jxzr_399e86e5-8d5a-4663-8ce4-a919dd6f6333/extract-utilities/0.log" Jan 29 09:18:36 crc kubenswrapper[4895]: I0129 09:18:36.750629 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsrdx_8276d7a1-1274-4d85-9243-ae6b7984ef52/registry-server/0.log" Jan 29 09:18:36 crc kubenswrapper[4895]: I0129 09:18:36.772263 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4jxzr_399e86e5-8d5a-4663-8ce4-a919dd6f6333/extract-utilities/0.log" Jan 29 09:18:36 crc kubenswrapper[4895]: I0129 09:18:36.820438 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4jxzr_399e86e5-8d5a-4663-8ce4-a919dd6f6333/extract-content/0.log" Jan 29 09:18:36 crc kubenswrapper[4895]: I0129 09:18:36.915546 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4jxzr_399e86e5-8d5a-4663-8ce4-a919dd6f6333/extract-content/0.log" Jan 29 09:18:37 crc kubenswrapper[4895]: I0129 09:18:37.178222 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4jxzr_399e86e5-8d5a-4663-8ce4-a919dd6f6333/extract-utilities/0.log" Jan 29 09:18:37 crc kubenswrapper[4895]: I0129 09:18:37.196772 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-4jxzr_399e86e5-8d5a-4663-8ce4-a919dd6f6333/extract-content/0.log" Jan 29 09:18:37 crc kubenswrapper[4895]: I0129 09:18:37.516316 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cdbdn_763fcf96-02dd-48dd-a5b0-40714be2a672/marketplace-operator/0.log" Jan 29 09:18:37 crc kubenswrapper[4895]: I0129 09:18:37.525578 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4jxzr_399e86e5-8d5a-4663-8ce4-a919dd6f6333/registry-server/0.log" Jan 29 09:18:37 crc kubenswrapper[4895]: I0129 09:18:37.584341 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btnwv_7274a4b5-8d6d-4743-b3ca-b1c3be13abbb/extract-utilities/0.log" Jan 29 09:18:37 crc kubenswrapper[4895]: I0129 09:18:37.870393 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btnwv_7274a4b5-8d6d-4743-b3ca-b1c3be13abbb/extract-content/0.log" Jan 29 09:18:37 crc kubenswrapper[4895]: I0129 09:18:37.876209 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btnwv_7274a4b5-8d6d-4743-b3ca-b1c3be13abbb/extract-utilities/0.log" Jan 29 09:18:37 crc kubenswrapper[4895]: I0129 09:18:37.924897 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btnwv_7274a4b5-8d6d-4743-b3ca-b1c3be13abbb/extract-content/0.log" Jan 29 09:18:38 crc kubenswrapper[4895]: I0129 09:18:38.099517 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btnwv_7274a4b5-8d6d-4743-b3ca-b1c3be13abbb/extract-utilities/0.log" Jan 29 09:18:38 crc kubenswrapper[4895]: I0129 09:18:38.118137 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-btnwv_7274a4b5-8d6d-4743-b3ca-b1c3be13abbb/extract-content/0.log" Jan 29 09:18:38 crc kubenswrapper[4895]: I0129 09:18:38.260086 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btnwv_7274a4b5-8d6d-4743-b3ca-b1c3be13abbb/registry-server/0.log" Jan 29 09:18:38 crc kubenswrapper[4895]: I0129 09:18:38.349948 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrslr_0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55/extract-utilities/0.log" Jan 29 09:18:38 crc kubenswrapper[4895]: I0129 09:18:38.630225 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrslr_0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55/extract-content/0.log" Jan 29 09:18:38 crc kubenswrapper[4895]: I0129 09:18:38.673364 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrslr_0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55/extract-utilities/0.log" Jan 29 09:18:38 crc kubenswrapper[4895]: I0129 09:18:38.685834 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrslr_0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55/extract-content/0.log" Jan 29 09:18:38 crc kubenswrapper[4895]: I0129 09:18:38.882383 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrslr_0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55/extract-utilities/0.log" Jan 29 09:18:39 crc kubenswrapper[4895]: I0129 09:18:39.502620 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrslr_0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55/extract-content/0.log" Jan 29 09:18:40 crc kubenswrapper[4895]: I0129 09:18:40.175995 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrslr_0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55/registry-server/0.log" Jan 29 
09:18:46 crc kubenswrapper[4895]: I0129 09:18:46.020459 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:18:46 crc kubenswrapper[4895]: I0129 09:18:46.021308 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:19:16 crc kubenswrapper[4895]: I0129 09:19:16.021137 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:19:16 crc kubenswrapper[4895]: I0129 09:19:16.021780 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:19:16 crc kubenswrapper[4895]: I0129 09:19:16.021863 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 09:19:16 crc kubenswrapper[4895]: I0129 09:19:16.023139 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc"} 
pod="openshift-machine-config-operator/machine-config-daemon-z82hk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:19:16 crc kubenswrapper[4895]: I0129 09:19:16.023217 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" containerID="cri-o://79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" gracePeriod=600 Jan 29 09:19:16 crc kubenswrapper[4895]: E0129 09:19:16.147378 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:19:16 crc kubenswrapper[4895]: I0129 09:19:16.988621 4895 generic.go:334] "Generic (PLEG): container finished" podID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" exitCode=0 Jan 29 09:19:16 crc kubenswrapper[4895]: I0129 09:19:16.988720 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerDied","Data":"79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc"} Jan 29 09:19:16 crc kubenswrapper[4895]: I0129 09:19:16.988988 4895 scope.go:117] "RemoveContainer" containerID="4ebd8df48bf0fb45ec83588a7131b73c01eddb922c6408acfd935865f5db90c6" Jan 29 09:19:16 crc kubenswrapper[4895]: I0129 09:19:16.990302 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 
29 09:19:16 crc kubenswrapper[4895]: E0129 09:19:16.990821 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:19:22 crc kubenswrapper[4895]: E0129 09:19:22.657953 4895 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.142:40604->38.129.56.142:46589: write tcp 38.129.56.142:40604->38.129.56.142:46589: write: broken pipe Jan 29 09:19:28 crc kubenswrapper[4895]: I0129 09:19:28.211502 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:19:28 crc kubenswrapper[4895]: E0129 09:19:28.212626 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:19:39 crc kubenswrapper[4895]: I0129 09:19:39.225386 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:19:39 crc kubenswrapper[4895]: E0129 09:19:39.228130 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:19:50 crc kubenswrapper[4895]: I0129 09:19:50.211773 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:19:50 crc kubenswrapper[4895]: E0129 09:19:50.213359 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:20:02 crc kubenswrapper[4895]: I0129 09:20:02.211695 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:20:02 crc kubenswrapper[4895]: E0129 09:20:02.214189 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:20:15 crc kubenswrapper[4895]: I0129 09:20:15.211724 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:20:15 crc kubenswrapper[4895]: E0129 09:20:15.214573 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:20:29 crc kubenswrapper[4895]: I0129 09:20:29.217590 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:20:29 crc kubenswrapper[4895]: E0129 09:20:29.218586 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:20:36 crc kubenswrapper[4895]: I0129 09:20:36.845237 4895 generic.go:334] "Generic (PLEG): container finished" podID="98b775ab-6e2f-42a9-91f7-1952e78a0dff" containerID="2ceedf1250478e3c5ff09b2447078f815039aef773cd3b90129cbfff1fab9053" exitCode=0 Jan 29 09:20:36 crc kubenswrapper[4895]: I0129 09:20:36.845301 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5dzdv/must-gather-t4lwx" event={"ID":"98b775ab-6e2f-42a9-91f7-1952e78a0dff","Type":"ContainerDied","Data":"2ceedf1250478e3c5ff09b2447078f815039aef773cd3b90129cbfff1fab9053"} Jan 29 09:20:36 crc kubenswrapper[4895]: I0129 09:20:36.846980 4895 scope.go:117] "RemoveContainer" containerID="2ceedf1250478e3c5ff09b2447078f815039aef773cd3b90129cbfff1fab9053" Jan 29 09:20:37 crc kubenswrapper[4895]: I0129 09:20:37.203427 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5dzdv_must-gather-t4lwx_98b775ab-6e2f-42a9-91f7-1952e78a0dff/gather/0.log" Jan 29 09:20:43 crc kubenswrapper[4895]: I0129 09:20:43.375850 4895 scope.go:117] "RemoveContainer" 
containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:20:43 crc kubenswrapper[4895]: E0129 09:20:43.377394 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:20:45 crc kubenswrapper[4895]: I0129 09:20:45.947213 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5dzdv/must-gather-t4lwx"] Jan 29 09:20:45 crc kubenswrapper[4895]: I0129 09:20:45.948326 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5dzdv/must-gather-t4lwx" podUID="98b775ab-6e2f-42a9-91f7-1952e78a0dff" containerName="copy" containerID="cri-o://88934454c3a3dcb90a696002abb4df5a9f624b6a51eeafb09cc207e40696c842" gracePeriod=2 Jan 29 09:20:45 crc kubenswrapper[4895]: I0129 09:20:45.956352 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5dzdv/must-gather-t4lwx"] Jan 29 09:20:46 crc kubenswrapper[4895]: I0129 09:20:46.462274 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5dzdv_must-gather-t4lwx_98b775ab-6e2f-42a9-91f7-1952e78a0dff/copy/0.log" Jan 29 09:20:46 crc kubenswrapper[4895]: I0129 09:20:46.463168 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5dzdv/must-gather-t4lwx" Jan 29 09:20:46 crc kubenswrapper[4895]: I0129 09:20:46.572679 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thlmz\" (UniqueName: \"kubernetes.io/projected/98b775ab-6e2f-42a9-91f7-1952e78a0dff-kube-api-access-thlmz\") pod \"98b775ab-6e2f-42a9-91f7-1952e78a0dff\" (UID: \"98b775ab-6e2f-42a9-91f7-1952e78a0dff\") " Jan 29 09:20:46 crc kubenswrapper[4895]: I0129 09:20:46.572960 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98b775ab-6e2f-42a9-91f7-1952e78a0dff-must-gather-output\") pod \"98b775ab-6e2f-42a9-91f7-1952e78a0dff\" (UID: \"98b775ab-6e2f-42a9-91f7-1952e78a0dff\") " Jan 29 09:20:46 crc kubenswrapper[4895]: I0129 09:20:46.584493 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98b775ab-6e2f-42a9-91f7-1952e78a0dff-kube-api-access-thlmz" (OuterVolumeSpecName: "kube-api-access-thlmz") pod "98b775ab-6e2f-42a9-91f7-1952e78a0dff" (UID: "98b775ab-6e2f-42a9-91f7-1952e78a0dff"). InnerVolumeSpecName "kube-api-access-thlmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:20:46 crc kubenswrapper[4895]: I0129 09:20:46.675722 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thlmz\" (UniqueName: \"kubernetes.io/projected/98b775ab-6e2f-42a9-91f7-1952e78a0dff-kube-api-access-thlmz\") on node \"crc\" DevicePath \"\"" Jan 29 09:20:46 crc kubenswrapper[4895]: I0129 09:20:46.727660 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98b775ab-6e2f-42a9-91f7-1952e78a0dff-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "98b775ab-6e2f-42a9-91f7-1952e78a0dff" (UID: "98b775ab-6e2f-42a9-91f7-1952e78a0dff"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:20:46 crc kubenswrapper[4895]: I0129 09:20:46.778139 4895 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98b775ab-6e2f-42a9-91f7-1952e78a0dff-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 09:20:46 crc kubenswrapper[4895]: I0129 09:20:46.943328 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5dzdv_must-gather-t4lwx_98b775ab-6e2f-42a9-91f7-1952e78a0dff/copy/0.log" Jan 29 09:20:46 crc kubenswrapper[4895]: I0129 09:20:46.944226 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5dzdv/must-gather-t4lwx" Jan 29 09:20:46 crc kubenswrapper[4895]: I0129 09:20:46.944371 4895 scope.go:117] "RemoveContainer" containerID="88934454c3a3dcb90a696002abb4df5a9f624b6a51eeafb09cc207e40696c842" Jan 29 09:20:46 crc kubenswrapper[4895]: I0129 09:20:46.944110 4895 generic.go:334] "Generic (PLEG): container finished" podID="98b775ab-6e2f-42a9-91f7-1952e78a0dff" containerID="88934454c3a3dcb90a696002abb4df5a9f624b6a51eeafb09cc207e40696c842" exitCode=143 Jan 29 09:20:46 crc kubenswrapper[4895]: I0129 09:20:46.968628 4895 scope.go:117] "RemoveContainer" containerID="2ceedf1250478e3c5ff09b2447078f815039aef773cd3b90129cbfff1fab9053" Jan 29 09:20:47 crc kubenswrapper[4895]: I0129 09:20:47.064633 4895 scope.go:117] "RemoveContainer" containerID="88934454c3a3dcb90a696002abb4df5a9f624b6a51eeafb09cc207e40696c842" Jan 29 09:20:47 crc kubenswrapper[4895]: E0129 09:20:47.065487 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88934454c3a3dcb90a696002abb4df5a9f624b6a51eeafb09cc207e40696c842\": container with ID starting with 88934454c3a3dcb90a696002abb4df5a9f624b6a51eeafb09cc207e40696c842 not found: ID does not exist" 
containerID="88934454c3a3dcb90a696002abb4df5a9f624b6a51eeafb09cc207e40696c842" Jan 29 09:20:47 crc kubenswrapper[4895]: I0129 09:20:47.065539 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88934454c3a3dcb90a696002abb4df5a9f624b6a51eeafb09cc207e40696c842"} err="failed to get container status \"88934454c3a3dcb90a696002abb4df5a9f624b6a51eeafb09cc207e40696c842\": rpc error: code = NotFound desc = could not find container \"88934454c3a3dcb90a696002abb4df5a9f624b6a51eeafb09cc207e40696c842\": container with ID starting with 88934454c3a3dcb90a696002abb4df5a9f624b6a51eeafb09cc207e40696c842 not found: ID does not exist" Jan 29 09:20:47 crc kubenswrapper[4895]: I0129 09:20:47.065567 4895 scope.go:117] "RemoveContainer" containerID="2ceedf1250478e3c5ff09b2447078f815039aef773cd3b90129cbfff1fab9053" Jan 29 09:20:47 crc kubenswrapper[4895]: E0129 09:20:47.066096 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ceedf1250478e3c5ff09b2447078f815039aef773cd3b90129cbfff1fab9053\": container with ID starting with 2ceedf1250478e3c5ff09b2447078f815039aef773cd3b90129cbfff1fab9053 not found: ID does not exist" containerID="2ceedf1250478e3c5ff09b2447078f815039aef773cd3b90129cbfff1fab9053" Jan 29 09:20:47 crc kubenswrapper[4895]: I0129 09:20:47.066166 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ceedf1250478e3c5ff09b2447078f815039aef773cd3b90129cbfff1fab9053"} err="failed to get container status \"2ceedf1250478e3c5ff09b2447078f815039aef773cd3b90129cbfff1fab9053\": rpc error: code = NotFound desc = could not find container \"2ceedf1250478e3c5ff09b2447078f815039aef773cd3b90129cbfff1fab9053\": container with ID starting with 2ceedf1250478e3c5ff09b2447078f815039aef773cd3b90129cbfff1fab9053 not found: ID does not exist" Jan 29 09:20:47 crc kubenswrapper[4895]: I0129 09:20:47.224957 4895 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98b775ab-6e2f-42a9-91f7-1952e78a0dff" path="/var/lib/kubelet/pods/98b775ab-6e2f-42a9-91f7-1952e78a0dff/volumes" Jan 29 09:20:57 crc kubenswrapper[4895]: I0129 09:20:57.212337 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:20:57 crc kubenswrapper[4895]: E0129 09:20:57.213497 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:21:06 crc kubenswrapper[4895]: I0129 09:21:06.868859 4895 scope.go:117] "RemoveContainer" containerID="4965c08c364131288b31f894482c47883ffd193d08f6569c6e49cdb6e5709ac1" Jan 29 09:21:11 crc kubenswrapper[4895]: I0129 09:21:11.213513 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:21:11 crc kubenswrapper[4895]: E0129 09:21:11.214651 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:21:23 crc kubenswrapper[4895]: I0129 09:21:23.211997 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:21:23 crc kubenswrapper[4895]: E0129 09:21:23.213207 4895 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:21:36 crc kubenswrapper[4895]: I0129 09:21:36.212661 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:21:36 crc kubenswrapper[4895]: E0129 09:21:36.213979 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:21:48 crc kubenswrapper[4895]: I0129 09:21:48.212255 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:21:48 crc kubenswrapper[4895]: E0129 09:21:48.213278 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:22:01 crc kubenswrapper[4895]: I0129 09:22:01.212110 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:22:01 crc kubenswrapper[4895]: E0129 09:22:01.214703 4895 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:22:15 crc kubenswrapper[4895]: I0129 09:22:15.212289 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:22:15 crc kubenswrapper[4895]: E0129 09:22:15.214962 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:22:27 crc kubenswrapper[4895]: I0129 09:22:27.213599 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:22:27 crc kubenswrapper[4895]: E0129 09:22:27.214828 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:22:40 crc kubenswrapper[4895]: I0129 09:22:40.212104 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:22:40 crc kubenswrapper[4895]: E0129 09:22:40.213151 4895 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:22:53 crc kubenswrapper[4895]: I0129 09:22:53.211198 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:22:53 crc kubenswrapper[4895]: E0129 09:22:53.212182 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:23:07 crc kubenswrapper[4895]: I0129 09:23:07.212528 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:23:07 crc kubenswrapper[4895]: E0129 09:23:07.213528 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:23:10 crc kubenswrapper[4895]: I0129 09:23:10.100418 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5fb7b47b77-cq2p9" podUID="cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687" 
containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 29 09:23:21 crc kubenswrapper[4895]: I0129 09:23:21.212033 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:23:21 crc kubenswrapper[4895]: E0129 09:23:21.213102 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.587142 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vwbt4/must-gather-msszk"] Jan 29 09:23:29 crc kubenswrapper[4895]: E0129 09:23:29.588394 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6b5992-595d-4753-8cf6-0a9f2e1e438b" containerName="container-00" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.588409 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6b5992-595d-4753-8cf6-0a9f2e1e438b" containerName="container-00" Jan 29 09:23:29 crc kubenswrapper[4895]: E0129 09:23:29.588438 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b775ab-6e2f-42a9-91f7-1952e78a0dff" containerName="copy" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.588444 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b775ab-6e2f-42a9-91f7-1952e78a0dff" containerName="copy" Jan 29 09:23:29 crc kubenswrapper[4895]: E0129 09:23:29.588453 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b775ab-6e2f-42a9-91f7-1952e78a0dff" containerName="gather" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.588460 4895 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="98b775ab-6e2f-42a9-91f7-1952e78a0dff" containerName="gather" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.588671 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b775ab-6e2f-42a9-91f7-1952e78a0dff" containerName="copy" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.588693 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b775ab-6e2f-42a9-91f7-1952e78a0dff" containerName="gather" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.588708 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6b5992-595d-4753-8cf6-0a9f2e1e438b" containerName="container-00" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.589951 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwbt4/must-gather-msszk" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.597651 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vwbt4"/"default-dockercfg-zzd4t" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.598356 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vwbt4"/"kube-root-ca.crt" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.607522 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vwbt4"/"openshift-service-ca.crt" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.628795 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vwbt4/must-gather-msszk"] Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.677311 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f1dbd138-f7a8-41a1-9720-8d89c6276e2d-must-gather-output\") pod \"must-gather-msszk\" (UID: \"f1dbd138-f7a8-41a1-9720-8d89c6276e2d\") " 
pod="openshift-must-gather-vwbt4/must-gather-msszk" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.677395 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lccmm\" (UniqueName: \"kubernetes.io/projected/f1dbd138-f7a8-41a1-9720-8d89c6276e2d-kube-api-access-lccmm\") pod \"must-gather-msszk\" (UID: \"f1dbd138-f7a8-41a1-9720-8d89c6276e2d\") " pod="openshift-must-gather-vwbt4/must-gather-msszk" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.779698 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f1dbd138-f7a8-41a1-9720-8d89c6276e2d-must-gather-output\") pod \"must-gather-msszk\" (UID: \"f1dbd138-f7a8-41a1-9720-8d89c6276e2d\") " pod="openshift-must-gather-vwbt4/must-gather-msszk" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.780183 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lccmm\" (UniqueName: \"kubernetes.io/projected/f1dbd138-f7a8-41a1-9720-8d89c6276e2d-kube-api-access-lccmm\") pod \"must-gather-msszk\" (UID: \"f1dbd138-f7a8-41a1-9720-8d89c6276e2d\") " pod="openshift-must-gather-vwbt4/must-gather-msszk" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.780262 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f1dbd138-f7a8-41a1-9720-8d89c6276e2d-must-gather-output\") pod \"must-gather-msszk\" (UID: \"f1dbd138-f7a8-41a1-9720-8d89c6276e2d\") " pod="openshift-must-gather-vwbt4/must-gather-msszk" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.812099 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lccmm\" (UniqueName: \"kubernetes.io/projected/f1dbd138-f7a8-41a1-9720-8d89c6276e2d-kube-api-access-lccmm\") pod \"must-gather-msszk\" (UID: \"f1dbd138-f7a8-41a1-9720-8d89c6276e2d\") " 
pod="openshift-must-gather-vwbt4/must-gather-msszk" Jan 29 09:23:29 crc kubenswrapper[4895]: I0129 09:23:29.921745 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwbt4/must-gather-msszk" Jan 29 09:23:30 crc kubenswrapper[4895]: I0129 09:23:30.483819 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vwbt4/must-gather-msszk"] Jan 29 09:23:31 crc kubenswrapper[4895]: I0129 09:23:31.169315 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwbt4/must-gather-msszk" event={"ID":"f1dbd138-f7a8-41a1-9720-8d89c6276e2d","Type":"ContainerStarted","Data":"ed304213532749f097a3aff8cd36277133e7b7cc8afd3061fbdea936acdaa0a6"} Jan 29 09:23:31 crc kubenswrapper[4895]: I0129 09:23:31.169809 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwbt4/must-gather-msszk" event={"ID":"f1dbd138-f7a8-41a1-9720-8d89c6276e2d","Type":"ContainerStarted","Data":"b224a0c19cb77bdb9f3ff898fdbf36493a8a3b46f6be195e0e0a2e1a6e155ab8"} Jan 29 09:23:32 crc kubenswrapper[4895]: I0129 09:23:32.193543 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwbt4/must-gather-msszk" event={"ID":"f1dbd138-f7a8-41a1-9720-8d89c6276e2d","Type":"ContainerStarted","Data":"cedad95642f0e851f8987a53eaa54ffdc53a7d39e5524d6e65db0bccc7e65db7"} Jan 29 09:23:32 crc kubenswrapper[4895]: I0129 09:23:32.218495 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vwbt4/must-gather-msszk" podStartSLOduration=3.217382009 podStartE2EDuration="3.217382009s" podCreationTimestamp="2026-01-29 09:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:23:32.212054037 +0000 UTC m=+2553.853562203" watchObservedRunningTime="2026-01-29 09:23:32.217382009 +0000 UTC m=+2553.858890165" Jan 29 09:23:33 crc kubenswrapper[4895]: 
I0129 09:23:33.212124 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:23:33 crc kubenswrapper[4895]: E0129 09:23:33.212720 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:23:35 crc kubenswrapper[4895]: I0129 09:23:35.530827 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vwbt4/crc-debug-sh6zl"] Jan 29 09:23:35 crc kubenswrapper[4895]: I0129 09:23:35.533279 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwbt4/crc-debug-sh6zl" Jan 29 09:23:35 crc kubenswrapper[4895]: I0129 09:23:35.606687 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21de9a92-9e39-4f57-8a76-8ac5b9175d40-host\") pod \"crc-debug-sh6zl\" (UID: \"21de9a92-9e39-4f57-8a76-8ac5b9175d40\") " pod="openshift-must-gather-vwbt4/crc-debug-sh6zl" Jan 29 09:23:35 crc kubenswrapper[4895]: I0129 09:23:35.607091 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwjz7\" (UniqueName: \"kubernetes.io/projected/21de9a92-9e39-4f57-8a76-8ac5b9175d40-kube-api-access-fwjz7\") pod \"crc-debug-sh6zl\" (UID: \"21de9a92-9e39-4f57-8a76-8ac5b9175d40\") " pod="openshift-must-gather-vwbt4/crc-debug-sh6zl" Jan 29 09:23:35 crc kubenswrapper[4895]: I0129 09:23:35.710357 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/21de9a92-9e39-4f57-8a76-8ac5b9175d40-host\") pod \"crc-debug-sh6zl\" (UID: \"21de9a92-9e39-4f57-8a76-8ac5b9175d40\") " pod="openshift-must-gather-vwbt4/crc-debug-sh6zl" Jan 29 09:23:35 crc kubenswrapper[4895]: I0129 09:23:35.710567 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwjz7\" (UniqueName: \"kubernetes.io/projected/21de9a92-9e39-4f57-8a76-8ac5b9175d40-kube-api-access-fwjz7\") pod \"crc-debug-sh6zl\" (UID: \"21de9a92-9e39-4f57-8a76-8ac5b9175d40\") " pod="openshift-must-gather-vwbt4/crc-debug-sh6zl" Jan 29 09:23:35 crc kubenswrapper[4895]: I0129 09:23:35.710572 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21de9a92-9e39-4f57-8a76-8ac5b9175d40-host\") pod \"crc-debug-sh6zl\" (UID: \"21de9a92-9e39-4f57-8a76-8ac5b9175d40\") " pod="openshift-must-gather-vwbt4/crc-debug-sh6zl" Jan 29 09:23:35 crc kubenswrapper[4895]: I0129 09:23:35.741393 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwjz7\" (UniqueName: \"kubernetes.io/projected/21de9a92-9e39-4f57-8a76-8ac5b9175d40-kube-api-access-fwjz7\") pod \"crc-debug-sh6zl\" (UID: \"21de9a92-9e39-4f57-8a76-8ac5b9175d40\") " pod="openshift-must-gather-vwbt4/crc-debug-sh6zl" Jan 29 09:23:35 crc kubenswrapper[4895]: I0129 09:23:35.861187 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vwbt4/crc-debug-sh6zl" Jan 29 09:23:36 crc kubenswrapper[4895]: I0129 09:23:36.331987 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwbt4/crc-debug-sh6zl" event={"ID":"21de9a92-9e39-4f57-8a76-8ac5b9175d40","Type":"ContainerStarted","Data":"9c4b9afdea956dd38d2e7721dc00470bf6df0a4bb6f0e1aba5fcdaabbe6f9f8d"} Jan 29 09:23:36 crc kubenswrapper[4895]: I0129 09:23:36.332604 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwbt4/crc-debug-sh6zl" event={"ID":"21de9a92-9e39-4f57-8a76-8ac5b9175d40","Type":"ContainerStarted","Data":"b29af9795f33d8d8c35b1846ff31e1179bf178819f9f9905e67064ef2b4c0de9"} Jan 29 09:23:36 crc kubenswrapper[4895]: I0129 09:23:36.354907 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vwbt4/crc-debug-sh6zl" podStartSLOduration=1.354882983 podStartE2EDuration="1.354882983s" podCreationTimestamp="2026-01-29 09:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:23:36.351387139 +0000 UTC m=+2557.992895285" watchObservedRunningTime="2026-01-29 09:23:36.354882983 +0000 UTC m=+2557.996391129" Jan 29 09:23:46 crc kubenswrapper[4895]: I0129 09:23:46.212522 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:23:46 crc kubenswrapper[4895]: E0129 09:23:46.215522 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:23:56 crc 
kubenswrapper[4895]: I0129 09:23:56.264231 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rnwjn"] Jan 29 09:23:56 crc kubenswrapper[4895]: I0129 09:23:56.269327 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:23:56 crc kubenswrapper[4895]: I0129 09:23:56.285720 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rnwjn"] Jan 29 09:23:56 crc kubenswrapper[4895]: I0129 09:23:56.321733 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395a35ee-dcd1-4695-b398-ba6027f5b082-catalog-content\") pod \"certified-operators-rnwjn\" (UID: \"395a35ee-dcd1-4695-b398-ba6027f5b082\") " pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:23:56 crc kubenswrapper[4895]: I0129 09:23:56.321817 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395a35ee-dcd1-4695-b398-ba6027f5b082-utilities\") pod \"certified-operators-rnwjn\" (UID: \"395a35ee-dcd1-4695-b398-ba6027f5b082\") " pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:23:56 crc kubenswrapper[4895]: I0129 09:23:56.321865 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2qm8\" (UniqueName: \"kubernetes.io/projected/395a35ee-dcd1-4695-b398-ba6027f5b082-kube-api-access-m2qm8\") pod \"certified-operators-rnwjn\" (UID: \"395a35ee-dcd1-4695-b398-ba6027f5b082\") " pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:23:56 crc kubenswrapper[4895]: I0129 09:23:56.424413 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/395a35ee-dcd1-4695-b398-ba6027f5b082-catalog-content\") pod \"certified-operators-rnwjn\" (UID: \"395a35ee-dcd1-4695-b398-ba6027f5b082\") " pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:23:56 crc kubenswrapper[4895]: I0129 09:23:56.424538 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395a35ee-dcd1-4695-b398-ba6027f5b082-utilities\") pod \"certified-operators-rnwjn\" (UID: \"395a35ee-dcd1-4695-b398-ba6027f5b082\") " pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:23:56 crc kubenswrapper[4895]: I0129 09:23:56.424573 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2qm8\" (UniqueName: \"kubernetes.io/projected/395a35ee-dcd1-4695-b398-ba6027f5b082-kube-api-access-m2qm8\") pod \"certified-operators-rnwjn\" (UID: \"395a35ee-dcd1-4695-b398-ba6027f5b082\") " pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:23:56 crc kubenswrapper[4895]: I0129 09:23:56.425958 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395a35ee-dcd1-4695-b398-ba6027f5b082-catalog-content\") pod \"certified-operators-rnwjn\" (UID: \"395a35ee-dcd1-4695-b398-ba6027f5b082\") " pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:23:56 crc kubenswrapper[4895]: I0129 09:23:56.426259 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395a35ee-dcd1-4695-b398-ba6027f5b082-utilities\") pod \"certified-operators-rnwjn\" (UID: \"395a35ee-dcd1-4695-b398-ba6027f5b082\") " pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:23:56 crc kubenswrapper[4895]: I0129 09:23:56.469932 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2qm8\" (UniqueName: 
\"kubernetes.io/projected/395a35ee-dcd1-4695-b398-ba6027f5b082-kube-api-access-m2qm8\") pod \"certified-operators-rnwjn\" (UID: \"395a35ee-dcd1-4695-b398-ba6027f5b082\") " pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:23:56 crc kubenswrapper[4895]: I0129 09:23:56.604537 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:23:57 crc kubenswrapper[4895]: I0129 09:23:57.200611 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rnwjn"] Jan 29 09:23:57 crc kubenswrapper[4895]: I0129 09:23:57.242895 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnwjn" event={"ID":"395a35ee-dcd1-4695-b398-ba6027f5b082","Type":"ContainerStarted","Data":"a5d86f16eb6098e83fbf5695ec0c7a6777bbf0bd08e819f9de3e5eb32cbc363f"} Jan 29 09:23:58 crc kubenswrapper[4895]: I0129 09:23:58.214248 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:23:58 crc kubenswrapper[4895]: E0129 09:23:58.215018 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:23:58 crc kubenswrapper[4895]: I0129 09:23:58.238017 4895 generic.go:334] "Generic (PLEG): container finished" podID="395a35ee-dcd1-4695-b398-ba6027f5b082" containerID="8a819e203884e9ea332bde2bfafc37454bed52af45348f60aed872be6a35a675" exitCode=0 Jan 29 09:23:58 crc kubenswrapper[4895]: I0129 09:23:58.238132 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-rnwjn" event={"ID":"395a35ee-dcd1-4695-b398-ba6027f5b082","Type":"ContainerDied","Data":"8a819e203884e9ea332bde2bfafc37454bed52af45348f60aed872be6a35a675"} Jan 29 09:23:58 crc kubenswrapper[4895]: I0129 09:23:58.241711 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:23:59 crc kubenswrapper[4895]: I0129 09:23:59.260280 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnwjn" event={"ID":"395a35ee-dcd1-4695-b398-ba6027f5b082","Type":"ContainerStarted","Data":"7b9227ae8c108e9a3499183f779f4a2fe0be1ff1cdd05997d359f05b92c026d4"} Jan 29 09:24:00 crc kubenswrapper[4895]: I0129 09:24:00.272340 4895 generic.go:334] "Generic (PLEG): container finished" podID="395a35ee-dcd1-4695-b398-ba6027f5b082" containerID="7b9227ae8c108e9a3499183f779f4a2fe0be1ff1cdd05997d359f05b92c026d4" exitCode=0 Jan 29 09:24:00 crc kubenswrapper[4895]: I0129 09:24:00.272397 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnwjn" event={"ID":"395a35ee-dcd1-4695-b398-ba6027f5b082","Type":"ContainerDied","Data":"7b9227ae8c108e9a3499183f779f4a2fe0be1ff1cdd05997d359f05b92c026d4"} Jan 29 09:24:01 crc kubenswrapper[4895]: I0129 09:24:01.287281 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnwjn" event={"ID":"395a35ee-dcd1-4695-b398-ba6027f5b082","Type":"ContainerStarted","Data":"8612ed9c48dfdeea37ccf104c368b01ef17aca0b91a1a8ced49710530a760ee9"} Jan 29 09:24:01 crc kubenswrapper[4895]: I0129 09:24:01.318054 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rnwjn" podStartSLOduration=2.847123562 podStartE2EDuration="5.318019526s" podCreationTimestamp="2026-01-29 09:23:56 +0000 UTC" firstStartedPulling="2026-01-29 09:23:58.24136775 +0000 UTC m=+2579.882875896" 
lastFinishedPulling="2026-01-29 09:24:00.712263714 +0000 UTC m=+2582.353771860" observedRunningTime="2026-01-29 09:24:01.308225165 +0000 UTC m=+2582.949733321" watchObservedRunningTime="2026-01-29 09:24:01.318019526 +0000 UTC m=+2582.959527712" Jan 29 09:24:06 crc kubenswrapper[4895]: I0129 09:24:06.604850 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:24:06 crc kubenswrapper[4895]: I0129 09:24:06.605552 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:24:06 crc kubenswrapper[4895]: I0129 09:24:06.658873 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:24:07 crc kubenswrapper[4895]: I0129 09:24:07.402008 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:24:07 crc kubenswrapper[4895]: I0129 09:24:07.468640 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rnwjn"] Jan 29 09:24:09 crc kubenswrapper[4895]: I0129 09:24:09.376318 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rnwjn" podUID="395a35ee-dcd1-4695-b398-ba6027f5b082" containerName="registry-server" containerID="cri-o://8612ed9c48dfdeea37ccf104c368b01ef17aca0b91a1a8ced49710530a760ee9" gracePeriod=2 Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.139965 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.276327 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395a35ee-dcd1-4695-b398-ba6027f5b082-catalog-content\") pod \"395a35ee-dcd1-4695-b398-ba6027f5b082\" (UID: \"395a35ee-dcd1-4695-b398-ba6027f5b082\") " Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.276940 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395a35ee-dcd1-4695-b398-ba6027f5b082-utilities\") pod \"395a35ee-dcd1-4695-b398-ba6027f5b082\" (UID: \"395a35ee-dcd1-4695-b398-ba6027f5b082\") " Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.277119 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2qm8\" (UniqueName: \"kubernetes.io/projected/395a35ee-dcd1-4695-b398-ba6027f5b082-kube-api-access-m2qm8\") pod \"395a35ee-dcd1-4695-b398-ba6027f5b082\" (UID: \"395a35ee-dcd1-4695-b398-ba6027f5b082\") " Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.279018 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/395a35ee-dcd1-4695-b398-ba6027f5b082-utilities" (OuterVolumeSpecName: "utilities") pod "395a35ee-dcd1-4695-b398-ba6027f5b082" (UID: "395a35ee-dcd1-4695-b398-ba6027f5b082"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.295298 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/395a35ee-dcd1-4695-b398-ba6027f5b082-kube-api-access-m2qm8" (OuterVolumeSpecName: "kube-api-access-m2qm8") pod "395a35ee-dcd1-4695-b398-ba6027f5b082" (UID: "395a35ee-dcd1-4695-b398-ba6027f5b082"). InnerVolumeSpecName "kube-api-access-m2qm8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.380729 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395a35ee-dcd1-4695-b398-ba6027f5b082-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.380792 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2qm8\" (UniqueName: \"kubernetes.io/projected/395a35ee-dcd1-4695-b398-ba6027f5b082-kube-api-access-m2qm8\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.389087 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/395a35ee-dcd1-4695-b398-ba6027f5b082-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "395a35ee-dcd1-4695-b398-ba6027f5b082" (UID: "395a35ee-dcd1-4695-b398-ba6027f5b082"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.405124 4895 generic.go:334] "Generic (PLEG): container finished" podID="395a35ee-dcd1-4695-b398-ba6027f5b082" containerID="8612ed9c48dfdeea37ccf104c368b01ef17aca0b91a1a8ced49710530a760ee9" exitCode=0 Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.405184 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnwjn" event={"ID":"395a35ee-dcd1-4695-b398-ba6027f5b082","Type":"ContainerDied","Data":"8612ed9c48dfdeea37ccf104c368b01ef17aca0b91a1a8ced49710530a760ee9"} Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.405205 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rnwjn" Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.405233 4895 scope.go:117] "RemoveContainer" containerID="8612ed9c48dfdeea37ccf104c368b01ef17aca0b91a1a8ced49710530a760ee9" Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.405219 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnwjn" event={"ID":"395a35ee-dcd1-4695-b398-ba6027f5b082","Type":"ContainerDied","Data":"a5d86f16eb6098e83fbf5695ec0c7a6777bbf0bd08e819f9de3e5eb32cbc363f"} Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.433237 4895 scope.go:117] "RemoveContainer" containerID="7b9227ae8c108e9a3499183f779f4a2fe0be1ff1cdd05997d359f05b92c026d4" Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.458230 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rnwjn"] Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.470504 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rnwjn"] Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.483205 4895 scope.go:117] "RemoveContainer" containerID="8a819e203884e9ea332bde2bfafc37454bed52af45348f60aed872be6a35a675" Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.486003 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395a35ee-dcd1-4695-b398-ba6027f5b082-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.524166 4895 scope.go:117] "RemoveContainer" containerID="8612ed9c48dfdeea37ccf104c368b01ef17aca0b91a1a8ced49710530a760ee9" Jan 29 09:24:10 crc kubenswrapper[4895]: E0129 09:24:10.524827 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8612ed9c48dfdeea37ccf104c368b01ef17aca0b91a1a8ced49710530a760ee9\": 
container with ID starting with 8612ed9c48dfdeea37ccf104c368b01ef17aca0b91a1a8ced49710530a760ee9 not found: ID does not exist" containerID="8612ed9c48dfdeea37ccf104c368b01ef17aca0b91a1a8ced49710530a760ee9" Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.524881 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8612ed9c48dfdeea37ccf104c368b01ef17aca0b91a1a8ced49710530a760ee9"} err="failed to get container status \"8612ed9c48dfdeea37ccf104c368b01ef17aca0b91a1a8ced49710530a760ee9\": rpc error: code = NotFound desc = could not find container \"8612ed9c48dfdeea37ccf104c368b01ef17aca0b91a1a8ced49710530a760ee9\": container with ID starting with 8612ed9c48dfdeea37ccf104c368b01ef17aca0b91a1a8ced49710530a760ee9 not found: ID does not exist" Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.524929 4895 scope.go:117] "RemoveContainer" containerID="7b9227ae8c108e9a3499183f779f4a2fe0be1ff1cdd05997d359f05b92c026d4" Jan 29 09:24:10 crc kubenswrapper[4895]: E0129 09:24:10.525527 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b9227ae8c108e9a3499183f779f4a2fe0be1ff1cdd05997d359f05b92c026d4\": container with ID starting with 7b9227ae8c108e9a3499183f779f4a2fe0be1ff1cdd05997d359f05b92c026d4 not found: ID does not exist" containerID="7b9227ae8c108e9a3499183f779f4a2fe0be1ff1cdd05997d359f05b92c026d4" Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.525588 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b9227ae8c108e9a3499183f779f4a2fe0be1ff1cdd05997d359f05b92c026d4"} err="failed to get container status \"7b9227ae8c108e9a3499183f779f4a2fe0be1ff1cdd05997d359f05b92c026d4\": rpc error: code = NotFound desc = could not find container \"7b9227ae8c108e9a3499183f779f4a2fe0be1ff1cdd05997d359f05b92c026d4\": container with ID starting with 
7b9227ae8c108e9a3499183f779f4a2fe0be1ff1cdd05997d359f05b92c026d4 not found: ID does not exist" Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.525616 4895 scope.go:117] "RemoveContainer" containerID="8a819e203884e9ea332bde2bfafc37454bed52af45348f60aed872be6a35a675" Jan 29 09:24:10 crc kubenswrapper[4895]: E0129 09:24:10.525976 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a819e203884e9ea332bde2bfafc37454bed52af45348f60aed872be6a35a675\": container with ID starting with 8a819e203884e9ea332bde2bfafc37454bed52af45348f60aed872be6a35a675 not found: ID does not exist" containerID="8a819e203884e9ea332bde2bfafc37454bed52af45348f60aed872be6a35a675" Jan 29 09:24:10 crc kubenswrapper[4895]: I0129 09:24:10.526002 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a819e203884e9ea332bde2bfafc37454bed52af45348f60aed872be6a35a675"} err="failed to get container status \"8a819e203884e9ea332bde2bfafc37454bed52af45348f60aed872be6a35a675\": rpc error: code = NotFound desc = could not find container \"8a819e203884e9ea332bde2bfafc37454bed52af45348f60aed872be6a35a675\": container with ID starting with 8a819e203884e9ea332bde2bfafc37454bed52af45348f60aed872be6a35a675 not found: ID does not exist" Jan 29 09:24:11 crc kubenswrapper[4895]: I0129 09:24:11.212466 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:24:11 crc kubenswrapper[4895]: E0129 09:24:11.213307 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" 
podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" Jan 29 09:24:11 crc kubenswrapper[4895]: I0129 09:24:11.232415 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="395a35ee-dcd1-4695-b398-ba6027f5b082" path="/var/lib/kubelet/pods/395a35ee-dcd1-4695-b398-ba6027f5b082/volumes" Jan 29 09:24:12 crc kubenswrapper[4895]: I0129 09:24:12.681824 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4qnmw"] Jan 29 09:24:12 crc kubenswrapper[4895]: E0129 09:24:12.682826 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395a35ee-dcd1-4695-b398-ba6027f5b082" containerName="registry-server" Jan 29 09:24:12 crc kubenswrapper[4895]: I0129 09:24:12.682843 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="395a35ee-dcd1-4695-b398-ba6027f5b082" containerName="registry-server" Jan 29 09:24:12 crc kubenswrapper[4895]: E0129 09:24:12.682868 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395a35ee-dcd1-4695-b398-ba6027f5b082" containerName="extract-utilities" Jan 29 09:24:12 crc kubenswrapper[4895]: I0129 09:24:12.682876 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="395a35ee-dcd1-4695-b398-ba6027f5b082" containerName="extract-utilities" Jan 29 09:24:12 crc kubenswrapper[4895]: E0129 09:24:12.682906 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395a35ee-dcd1-4695-b398-ba6027f5b082" containerName="extract-content" Jan 29 09:24:12 crc kubenswrapper[4895]: I0129 09:24:12.682929 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="395a35ee-dcd1-4695-b398-ba6027f5b082" containerName="extract-content" Jan 29 09:24:12 crc kubenswrapper[4895]: I0129 09:24:12.683130 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="395a35ee-dcd1-4695-b398-ba6027f5b082" containerName="registry-server" Jan 29 09:24:12 crc kubenswrapper[4895]: I0129 09:24:12.684776 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:12 crc kubenswrapper[4895]: I0129 09:24:12.713732 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qnmw"] Jan 29 09:24:12 crc kubenswrapper[4895]: I0129 09:24:12.738985 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41f51d06-e523-4608-852f-9021f210c26a-utilities\") pod \"redhat-operators-4qnmw\" (UID: \"41f51d06-e523-4608-852f-9021f210c26a\") " pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:12 crc kubenswrapper[4895]: I0129 09:24:12.739095 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4dq8\" (UniqueName: \"kubernetes.io/projected/41f51d06-e523-4608-852f-9021f210c26a-kube-api-access-s4dq8\") pod \"redhat-operators-4qnmw\" (UID: \"41f51d06-e523-4608-852f-9021f210c26a\") " pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:12 crc kubenswrapper[4895]: I0129 09:24:12.739195 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41f51d06-e523-4608-852f-9021f210c26a-catalog-content\") pod \"redhat-operators-4qnmw\" (UID: \"41f51d06-e523-4608-852f-9021f210c26a\") " pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:12 crc kubenswrapper[4895]: I0129 09:24:12.849668 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41f51d06-e523-4608-852f-9021f210c26a-utilities\") pod \"redhat-operators-4qnmw\" (UID: \"41f51d06-e523-4608-852f-9021f210c26a\") " pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:12 crc kubenswrapper[4895]: I0129 09:24:12.849811 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s4dq8\" (UniqueName: \"kubernetes.io/projected/41f51d06-e523-4608-852f-9021f210c26a-kube-api-access-s4dq8\") pod \"redhat-operators-4qnmw\" (UID: \"41f51d06-e523-4608-852f-9021f210c26a\") " pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:12 crc kubenswrapper[4895]: I0129 09:24:12.849992 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41f51d06-e523-4608-852f-9021f210c26a-catalog-content\") pod \"redhat-operators-4qnmw\" (UID: \"41f51d06-e523-4608-852f-9021f210c26a\") " pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:12 crc kubenswrapper[4895]: I0129 09:24:12.851479 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41f51d06-e523-4608-852f-9021f210c26a-catalog-content\") pod \"redhat-operators-4qnmw\" (UID: \"41f51d06-e523-4608-852f-9021f210c26a\") " pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:12 crc kubenswrapper[4895]: I0129 09:24:12.851753 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41f51d06-e523-4608-852f-9021f210c26a-utilities\") pod \"redhat-operators-4qnmw\" (UID: \"41f51d06-e523-4608-852f-9021f210c26a\") " pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:12 crc kubenswrapper[4895]: I0129 09:24:12.896008 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4dq8\" (UniqueName: \"kubernetes.io/projected/41f51d06-e523-4608-852f-9021f210c26a-kube-api-access-s4dq8\") pod \"redhat-operators-4qnmw\" (UID: \"41f51d06-e523-4608-852f-9021f210c26a\") " pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:13 crc kubenswrapper[4895]: I0129 09:24:13.021257 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:13 crc kubenswrapper[4895]: I0129 09:24:13.576710 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qnmw"] Jan 29 09:24:14 crc kubenswrapper[4895]: I0129 09:24:14.561623 4895 generic.go:334] "Generic (PLEG): container finished" podID="41f51d06-e523-4608-852f-9021f210c26a" containerID="00b1cafd84bf2830f6e2382496a2e6a0bdc821bc5e7d04ccaf319a001ec55f4b" exitCode=0 Jan 29 09:24:14 crc kubenswrapper[4895]: I0129 09:24:14.562108 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qnmw" event={"ID":"41f51d06-e523-4608-852f-9021f210c26a","Type":"ContainerDied","Data":"00b1cafd84bf2830f6e2382496a2e6a0bdc821bc5e7d04ccaf319a001ec55f4b"} Jan 29 09:24:14 crc kubenswrapper[4895]: I0129 09:24:14.562149 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qnmw" event={"ID":"41f51d06-e523-4608-852f-9021f210c26a","Type":"ContainerStarted","Data":"af36348eddd68a6d7404d9fa18db5d19e3864fdd5aa786e4e08fbabf374640d0"} Jan 29 09:24:16 crc kubenswrapper[4895]: I0129 09:24:16.590672 4895 generic.go:334] "Generic (PLEG): container finished" podID="41f51d06-e523-4608-852f-9021f210c26a" containerID="d14543b87f364184889d8db8ecc96f06a76f78e1b4cecfd58747949459629c6f" exitCode=0 Jan 29 09:24:16 crc kubenswrapper[4895]: I0129 09:24:16.590809 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qnmw" event={"ID":"41f51d06-e523-4608-852f-9021f210c26a","Type":"ContainerDied","Data":"d14543b87f364184889d8db8ecc96f06a76f78e1b4cecfd58747949459629c6f"} Jan 29 09:24:17 crc kubenswrapper[4895]: I0129 09:24:17.603411 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qnmw" 
event={"ID":"41f51d06-e523-4608-852f-9021f210c26a","Type":"ContainerStarted","Data":"e57c87cbba0f66e0e18b7bf956b6195484e9dc5483599034213fc1a90b185bda"} Jan 29 09:24:17 crc kubenswrapper[4895]: I0129 09:24:17.606242 4895 generic.go:334] "Generic (PLEG): container finished" podID="21de9a92-9e39-4f57-8a76-8ac5b9175d40" containerID="9c4b9afdea956dd38d2e7721dc00470bf6df0a4bb6f0e1aba5fcdaabbe6f9f8d" exitCode=0 Jan 29 09:24:17 crc kubenswrapper[4895]: I0129 09:24:17.606325 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwbt4/crc-debug-sh6zl" event={"ID":"21de9a92-9e39-4f57-8a76-8ac5b9175d40","Type":"ContainerDied","Data":"9c4b9afdea956dd38d2e7721dc00470bf6df0a4bb6f0e1aba5fcdaabbe6f9f8d"} Jan 29 09:24:17 crc kubenswrapper[4895]: I0129 09:24:17.628133 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4qnmw" podStartSLOduration=3.109017808 podStartE2EDuration="5.628100045s" podCreationTimestamp="2026-01-29 09:24:12 +0000 UTC" firstStartedPulling="2026-01-29 09:24:14.565766942 +0000 UTC m=+2596.207275088" lastFinishedPulling="2026-01-29 09:24:17.084849179 +0000 UTC m=+2598.726357325" observedRunningTime="2026-01-29 09:24:17.624284443 +0000 UTC m=+2599.265792589" watchObservedRunningTime="2026-01-29 09:24:17.628100045 +0000 UTC m=+2599.269608191" Jan 29 09:24:18 crc kubenswrapper[4895]: I0129 09:24:18.728664 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vwbt4/crc-debug-sh6zl" Jan 29 09:24:18 crc kubenswrapper[4895]: I0129 09:24:18.782465 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vwbt4/crc-debug-sh6zl"] Jan 29 09:24:18 crc kubenswrapper[4895]: I0129 09:24:18.791277 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vwbt4/crc-debug-sh6zl"] Jan 29 09:24:18 crc kubenswrapper[4895]: I0129 09:24:18.792403 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21de9a92-9e39-4f57-8a76-8ac5b9175d40-host\") pod \"21de9a92-9e39-4f57-8a76-8ac5b9175d40\" (UID: \"21de9a92-9e39-4f57-8a76-8ac5b9175d40\") " Jan 29 09:24:18 crc kubenswrapper[4895]: I0129 09:24:18.792504 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwjz7\" (UniqueName: \"kubernetes.io/projected/21de9a92-9e39-4f57-8a76-8ac5b9175d40-kube-api-access-fwjz7\") pod \"21de9a92-9e39-4f57-8a76-8ac5b9175d40\" (UID: \"21de9a92-9e39-4f57-8a76-8ac5b9175d40\") " Jan 29 09:24:18 crc kubenswrapper[4895]: I0129 09:24:18.792512 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21de9a92-9e39-4f57-8a76-8ac5b9175d40-host" (OuterVolumeSpecName: "host") pod "21de9a92-9e39-4f57-8a76-8ac5b9175d40" (UID: "21de9a92-9e39-4f57-8a76-8ac5b9175d40"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:24:18 crc kubenswrapper[4895]: I0129 09:24:18.793135 4895 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21de9a92-9e39-4f57-8a76-8ac5b9175d40-host\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:18 crc kubenswrapper[4895]: I0129 09:24:18.799859 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21de9a92-9e39-4f57-8a76-8ac5b9175d40-kube-api-access-fwjz7" (OuterVolumeSpecName: "kube-api-access-fwjz7") pod "21de9a92-9e39-4f57-8a76-8ac5b9175d40" (UID: "21de9a92-9e39-4f57-8a76-8ac5b9175d40"). InnerVolumeSpecName "kube-api-access-fwjz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:18 crc kubenswrapper[4895]: I0129 09:24:18.895477 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwjz7\" (UniqueName: \"kubernetes.io/projected/21de9a92-9e39-4f57-8a76-8ac5b9175d40-kube-api-access-fwjz7\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:19 crc kubenswrapper[4895]: I0129 09:24:19.224887 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21de9a92-9e39-4f57-8a76-8ac5b9175d40" path="/var/lib/kubelet/pods/21de9a92-9e39-4f57-8a76-8ac5b9175d40/volumes" Jan 29 09:24:19 crc kubenswrapper[4895]: I0129 09:24:19.628155 4895 scope.go:117] "RemoveContainer" containerID="9c4b9afdea956dd38d2e7721dc00470bf6df0a4bb6f0e1aba5fcdaabbe6f9f8d" Jan 29 09:24:19 crc kubenswrapper[4895]: I0129 09:24:19.628171 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vwbt4/crc-debug-sh6zl" Jan 29 09:24:20 crc kubenswrapper[4895]: I0129 09:24:20.083437 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vwbt4/crc-debug-fvtkf"] Jan 29 09:24:20 crc kubenswrapper[4895]: E0129 09:24:20.084196 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21de9a92-9e39-4f57-8a76-8ac5b9175d40" containerName="container-00" Jan 29 09:24:20 crc kubenswrapper[4895]: I0129 09:24:20.084214 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="21de9a92-9e39-4f57-8a76-8ac5b9175d40" containerName="container-00" Jan 29 09:24:20 crc kubenswrapper[4895]: I0129 09:24:20.084444 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="21de9a92-9e39-4f57-8a76-8ac5b9175d40" containerName="container-00" Jan 29 09:24:20 crc kubenswrapper[4895]: I0129 09:24:20.085316 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwbt4/crc-debug-fvtkf" Jan 29 09:24:20 crc kubenswrapper[4895]: I0129 09:24:20.124661 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6p99\" (UniqueName: \"kubernetes.io/projected/ea8e5e11-bb64-4e66-a1d3-28d9b2380a97-kube-api-access-b6p99\") pod \"crc-debug-fvtkf\" (UID: \"ea8e5e11-bb64-4e66-a1d3-28d9b2380a97\") " pod="openshift-must-gather-vwbt4/crc-debug-fvtkf" Jan 29 09:24:20 crc kubenswrapper[4895]: I0129 09:24:20.125382 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ea8e5e11-bb64-4e66-a1d3-28d9b2380a97-host\") pod \"crc-debug-fvtkf\" (UID: \"ea8e5e11-bb64-4e66-a1d3-28d9b2380a97\") " pod="openshift-must-gather-vwbt4/crc-debug-fvtkf" Jan 29 09:24:20 crc kubenswrapper[4895]: I0129 09:24:20.227636 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/ea8e5e11-bb64-4e66-a1d3-28d9b2380a97-host\") pod \"crc-debug-fvtkf\" (UID: \"ea8e5e11-bb64-4e66-a1d3-28d9b2380a97\") " pod="openshift-must-gather-vwbt4/crc-debug-fvtkf" Jan 29 09:24:20 crc kubenswrapper[4895]: I0129 09:24:20.227803 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6p99\" (UniqueName: \"kubernetes.io/projected/ea8e5e11-bb64-4e66-a1d3-28d9b2380a97-kube-api-access-b6p99\") pod \"crc-debug-fvtkf\" (UID: \"ea8e5e11-bb64-4e66-a1d3-28d9b2380a97\") " pod="openshift-must-gather-vwbt4/crc-debug-fvtkf" Jan 29 09:24:20 crc kubenswrapper[4895]: I0129 09:24:20.227861 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ea8e5e11-bb64-4e66-a1d3-28d9b2380a97-host\") pod \"crc-debug-fvtkf\" (UID: \"ea8e5e11-bb64-4e66-a1d3-28d9b2380a97\") " pod="openshift-must-gather-vwbt4/crc-debug-fvtkf" Jan 29 09:24:20 crc kubenswrapper[4895]: I0129 09:24:20.267958 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6p99\" (UniqueName: \"kubernetes.io/projected/ea8e5e11-bb64-4e66-a1d3-28d9b2380a97-kube-api-access-b6p99\") pod \"crc-debug-fvtkf\" (UID: \"ea8e5e11-bb64-4e66-a1d3-28d9b2380a97\") " pod="openshift-must-gather-vwbt4/crc-debug-fvtkf" Jan 29 09:24:20 crc kubenswrapper[4895]: I0129 09:24:20.404955 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vwbt4/crc-debug-fvtkf" Jan 29 09:24:20 crc kubenswrapper[4895]: I0129 09:24:20.641308 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwbt4/crc-debug-fvtkf" event={"ID":"ea8e5e11-bb64-4e66-a1d3-28d9b2380a97","Type":"ContainerStarted","Data":"2bf8b86c3614119826a90ac66519de5894abf92d061e01aaf353e7e1ecc7e127"} Jan 29 09:24:21 crc kubenswrapper[4895]: I0129 09:24:21.657468 4895 generic.go:334] "Generic (PLEG): container finished" podID="ea8e5e11-bb64-4e66-a1d3-28d9b2380a97" containerID="1d48c3d412e801cc7fb4496a2d8223487fa4c5d4413babc209dc722197f80ecf" exitCode=0 Jan 29 09:24:21 crc kubenswrapper[4895]: I0129 09:24:21.657556 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwbt4/crc-debug-fvtkf" event={"ID":"ea8e5e11-bb64-4e66-a1d3-28d9b2380a97","Type":"ContainerDied","Data":"1d48c3d412e801cc7fb4496a2d8223487fa4c5d4413babc209dc722197f80ecf"} Jan 29 09:24:22 crc kubenswrapper[4895]: I0129 09:24:22.087221 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vwbt4/crc-debug-fvtkf"] Jan 29 09:24:22 crc kubenswrapper[4895]: I0129 09:24:22.097299 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vwbt4/crc-debug-fvtkf"] Jan 29 09:24:22 crc kubenswrapper[4895]: I0129 09:24:22.784040 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vwbt4/crc-debug-fvtkf" Jan 29 09:24:22 crc kubenswrapper[4895]: I0129 09:24:22.896881 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ea8e5e11-bb64-4e66-a1d3-28d9b2380a97-host\") pod \"ea8e5e11-bb64-4e66-a1d3-28d9b2380a97\" (UID: \"ea8e5e11-bb64-4e66-a1d3-28d9b2380a97\") " Jan 29 09:24:22 crc kubenswrapper[4895]: I0129 09:24:22.897320 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6p99\" (UniqueName: \"kubernetes.io/projected/ea8e5e11-bb64-4e66-a1d3-28d9b2380a97-kube-api-access-b6p99\") pod \"ea8e5e11-bb64-4e66-a1d3-28d9b2380a97\" (UID: \"ea8e5e11-bb64-4e66-a1d3-28d9b2380a97\") " Jan 29 09:24:22 crc kubenswrapper[4895]: I0129 09:24:22.897063 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea8e5e11-bb64-4e66-a1d3-28d9b2380a97-host" (OuterVolumeSpecName: "host") pod "ea8e5e11-bb64-4e66-a1d3-28d9b2380a97" (UID: "ea8e5e11-bb64-4e66-a1d3-28d9b2380a97"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:24:22 crc kubenswrapper[4895]: I0129 09:24:22.898473 4895 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ea8e5e11-bb64-4e66-a1d3-28d9b2380a97-host\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:22 crc kubenswrapper[4895]: I0129 09:24:22.905211 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea8e5e11-bb64-4e66-a1d3-28d9b2380a97-kube-api-access-b6p99" (OuterVolumeSpecName: "kube-api-access-b6p99") pod "ea8e5e11-bb64-4e66-a1d3-28d9b2380a97" (UID: "ea8e5e11-bb64-4e66-a1d3-28d9b2380a97"). InnerVolumeSpecName "kube-api-access-b6p99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:23 crc kubenswrapper[4895]: I0129 09:24:23.000512 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6p99\" (UniqueName: \"kubernetes.io/projected/ea8e5e11-bb64-4e66-a1d3-28d9b2380a97-kube-api-access-b6p99\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:23 crc kubenswrapper[4895]: I0129 09:24:23.022289 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:23 crc kubenswrapper[4895]: I0129 09:24:23.022444 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:23 crc kubenswrapper[4895]: I0129 09:24:23.230230 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea8e5e11-bb64-4e66-a1d3-28d9b2380a97" path="/var/lib/kubelet/pods/ea8e5e11-bb64-4e66-a1d3-28d9b2380a97/volumes" Jan 29 09:24:23 crc kubenswrapper[4895]: I0129 09:24:23.339446 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vwbt4/crc-debug-gs6cd"] Jan 29 09:24:23 crc kubenswrapper[4895]: E0129 09:24:23.340377 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8e5e11-bb64-4e66-a1d3-28d9b2380a97" containerName="container-00" Jan 29 09:24:23 crc kubenswrapper[4895]: I0129 09:24:23.340503 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8e5e11-bb64-4e66-a1d3-28d9b2380a97" containerName="container-00" Jan 29 09:24:23 crc kubenswrapper[4895]: I0129 09:24:23.340862 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea8e5e11-bb64-4e66-a1d3-28d9b2380a97" containerName="container-00" Jan 29 09:24:23 crc kubenswrapper[4895]: I0129 09:24:23.341847 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vwbt4/crc-debug-gs6cd" Jan 29 09:24:23 crc kubenswrapper[4895]: I0129 09:24:23.409366 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qmxw\" (UniqueName: \"kubernetes.io/projected/da4fc6e3-cebd-4ed2-a002-4278d4400d1a-kube-api-access-8qmxw\") pod \"crc-debug-gs6cd\" (UID: \"da4fc6e3-cebd-4ed2-a002-4278d4400d1a\") " pod="openshift-must-gather-vwbt4/crc-debug-gs6cd" Jan 29 09:24:23 crc kubenswrapper[4895]: I0129 09:24:23.410108 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/da4fc6e3-cebd-4ed2-a002-4278d4400d1a-host\") pod \"crc-debug-gs6cd\" (UID: \"da4fc6e3-cebd-4ed2-a002-4278d4400d1a\") " pod="openshift-must-gather-vwbt4/crc-debug-gs6cd" Jan 29 09:24:23 crc kubenswrapper[4895]: I0129 09:24:23.512453 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qmxw\" (UniqueName: \"kubernetes.io/projected/da4fc6e3-cebd-4ed2-a002-4278d4400d1a-kube-api-access-8qmxw\") pod \"crc-debug-gs6cd\" (UID: \"da4fc6e3-cebd-4ed2-a002-4278d4400d1a\") " pod="openshift-must-gather-vwbt4/crc-debug-gs6cd" Jan 29 09:24:23 crc kubenswrapper[4895]: I0129 09:24:23.512560 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/da4fc6e3-cebd-4ed2-a002-4278d4400d1a-host\") pod \"crc-debug-gs6cd\" (UID: \"da4fc6e3-cebd-4ed2-a002-4278d4400d1a\") " pod="openshift-must-gather-vwbt4/crc-debug-gs6cd" Jan 29 09:24:23 crc kubenswrapper[4895]: I0129 09:24:23.512716 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/da4fc6e3-cebd-4ed2-a002-4278d4400d1a-host\") pod \"crc-debug-gs6cd\" (UID: \"da4fc6e3-cebd-4ed2-a002-4278d4400d1a\") " pod="openshift-must-gather-vwbt4/crc-debug-gs6cd" Jan 29 09:24:23 crc 
kubenswrapper[4895]: I0129 09:24:23.532749 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qmxw\" (UniqueName: \"kubernetes.io/projected/da4fc6e3-cebd-4ed2-a002-4278d4400d1a-kube-api-access-8qmxw\") pod \"crc-debug-gs6cd\" (UID: \"da4fc6e3-cebd-4ed2-a002-4278d4400d1a\") " pod="openshift-must-gather-vwbt4/crc-debug-gs6cd" Jan 29 09:24:23 crc kubenswrapper[4895]: I0129 09:24:23.660729 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwbt4/crc-debug-gs6cd" Jan 29 09:24:23 crc kubenswrapper[4895]: I0129 09:24:23.677192 4895 scope.go:117] "RemoveContainer" containerID="1d48c3d412e801cc7fb4496a2d8223487fa4c5d4413babc209dc722197f80ecf" Jan 29 09:24:23 crc kubenswrapper[4895]: I0129 09:24:23.677201 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwbt4/crc-debug-fvtkf" Jan 29 09:24:23 crc kubenswrapper[4895]: W0129 09:24:23.694191 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda4fc6e3_cebd_4ed2_a002_4278d4400d1a.slice/crio-4d6403f8fe2be1e3a407e69c347cdd41c196b2fdba5b7909003a6556866762ce WatchSource:0}: Error finding container 4d6403f8fe2be1e3a407e69c347cdd41c196b2fdba5b7909003a6556866762ce: Status 404 returned error can't find the container with id 4d6403f8fe2be1e3a407e69c347cdd41c196b2fdba5b7909003a6556866762ce Jan 29 09:24:24 crc kubenswrapper[4895]: I0129 09:24:24.088198 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4qnmw" podUID="41f51d06-e523-4608-852f-9021f210c26a" containerName="registry-server" probeResult="failure" output=< Jan 29 09:24:24 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Jan 29 09:24:24 crc kubenswrapper[4895]: > Jan 29 09:24:24 crc kubenswrapper[4895]: I0129 09:24:24.689260 4895 generic.go:334] "Generic (PLEG): container 
finished" podID="da4fc6e3-cebd-4ed2-a002-4278d4400d1a" containerID="f9fe41a652b3934e5b522a4a0e9ec85285059bf42b6f9ff8ffbf0196880e9d2d" exitCode=0 Jan 29 09:24:24 crc kubenswrapper[4895]: I0129 09:24:24.689517 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwbt4/crc-debug-gs6cd" event={"ID":"da4fc6e3-cebd-4ed2-a002-4278d4400d1a","Type":"ContainerDied","Data":"f9fe41a652b3934e5b522a4a0e9ec85285059bf42b6f9ff8ffbf0196880e9d2d"} Jan 29 09:24:24 crc kubenswrapper[4895]: I0129 09:24:24.689844 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwbt4/crc-debug-gs6cd" event={"ID":"da4fc6e3-cebd-4ed2-a002-4278d4400d1a","Type":"ContainerStarted","Data":"4d6403f8fe2be1e3a407e69c347cdd41c196b2fdba5b7909003a6556866762ce"} Jan 29 09:24:24 crc kubenswrapper[4895]: I0129 09:24:24.767498 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vwbt4/crc-debug-gs6cd"] Jan 29 09:24:24 crc kubenswrapper[4895]: I0129 09:24:24.779886 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vwbt4/crc-debug-gs6cd"] Jan 29 09:24:25 crc kubenswrapper[4895]: I0129 09:24:25.212344 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:24:25 crc kubenswrapper[4895]: I0129 09:24:25.705048 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerStarted","Data":"43d7dd8fbfd238a054c8f2b2f0b751543be50bb5b4e90095835c26e89f22f3db"} Jan 29 09:24:25 crc kubenswrapper[4895]: I0129 09:24:25.836699 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vwbt4/crc-debug-gs6cd" Jan 29 09:24:25 crc kubenswrapper[4895]: I0129 09:24:25.878727 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/da4fc6e3-cebd-4ed2-a002-4278d4400d1a-host\") pod \"da4fc6e3-cebd-4ed2-a002-4278d4400d1a\" (UID: \"da4fc6e3-cebd-4ed2-a002-4278d4400d1a\") " Jan 29 09:24:25 crc kubenswrapper[4895]: I0129 09:24:25.878804 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qmxw\" (UniqueName: \"kubernetes.io/projected/da4fc6e3-cebd-4ed2-a002-4278d4400d1a-kube-api-access-8qmxw\") pod \"da4fc6e3-cebd-4ed2-a002-4278d4400d1a\" (UID: \"da4fc6e3-cebd-4ed2-a002-4278d4400d1a\") " Jan 29 09:24:25 crc kubenswrapper[4895]: I0129 09:24:25.878954 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da4fc6e3-cebd-4ed2-a002-4278d4400d1a-host" (OuterVolumeSpecName: "host") pod "da4fc6e3-cebd-4ed2-a002-4278d4400d1a" (UID: "da4fc6e3-cebd-4ed2-a002-4278d4400d1a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:24:25 crc kubenswrapper[4895]: I0129 09:24:25.879296 4895 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/da4fc6e3-cebd-4ed2-a002-4278d4400d1a-host\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:25 crc kubenswrapper[4895]: I0129 09:24:25.896425 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da4fc6e3-cebd-4ed2-a002-4278d4400d1a-kube-api-access-8qmxw" (OuterVolumeSpecName: "kube-api-access-8qmxw") pod "da4fc6e3-cebd-4ed2-a002-4278d4400d1a" (UID: "da4fc6e3-cebd-4ed2-a002-4278d4400d1a"). InnerVolumeSpecName "kube-api-access-8qmxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:25 crc kubenswrapper[4895]: I0129 09:24:25.981682 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qmxw\" (UniqueName: \"kubernetes.io/projected/da4fc6e3-cebd-4ed2-a002-4278d4400d1a-kube-api-access-8qmxw\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:26 crc kubenswrapper[4895]: I0129 09:24:26.714745 4895 scope.go:117] "RemoveContainer" containerID="f9fe41a652b3934e5b522a4a0e9ec85285059bf42b6f9ff8ffbf0196880e9d2d" Jan 29 09:24:26 crc kubenswrapper[4895]: I0129 09:24:26.714787 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwbt4/crc-debug-gs6cd" Jan 29 09:24:27 crc kubenswrapper[4895]: I0129 09:24:27.229151 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da4fc6e3-cebd-4ed2-a002-4278d4400d1a" path="/var/lib/kubelet/pods/da4fc6e3-cebd-4ed2-a002-4278d4400d1a/volumes" Jan 29 09:24:33 crc kubenswrapper[4895]: I0129 09:24:33.075537 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:33 crc kubenswrapper[4895]: I0129 09:24:33.129936 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:33 crc kubenswrapper[4895]: I0129 09:24:33.326208 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4qnmw"] Jan 29 09:24:34 crc kubenswrapper[4895]: I0129 09:24:34.799296 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4qnmw" podUID="41f51d06-e523-4608-852f-9021f210c26a" containerName="registry-server" containerID="cri-o://e57c87cbba0f66e0e18b7bf956b6195484e9dc5483599034213fc1a90b185bda" gracePeriod=2 Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.374379 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.403800 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4dq8\" (UniqueName: \"kubernetes.io/projected/41f51d06-e523-4608-852f-9021f210c26a-kube-api-access-s4dq8\") pod \"41f51d06-e523-4608-852f-9021f210c26a\" (UID: \"41f51d06-e523-4608-852f-9021f210c26a\") " Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.404145 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41f51d06-e523-4608-852f-9021f210c26a-catalog-content\") pod \"41f51d06-e523-4608-852f-9021f210c26a\" (UID: \"41f51d06-e523-4608-852f-9021f210c26a\") " Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.404316 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41f51d06-e523-4608-852f-9021f210c26a-utilities\") pod \"41f51d06-e523-4608-852f-9021f210c26a\" (UID: \"41f51d06-e523-4608-852f-9021f210c26a\") " Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.405699 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41f51d06-e523-4608-852f-9021f210c26a-utilities" (OuterVolumeSpecName: "utilities") pod "41f51d06-e523-4608-852f-9021f210c26a" (UID: "41f51d06-e523-4608-852f-9021f210c26a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.414165 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41f51d06-e523-4608-852f-9021f210c26a-kube-api-access-s4dq8" (OuterVolumeSpecName: "kube-api-access-s4dq8") pod "41f51d06-e523-4608-852f-9021f210c26a" (UID: "41f51d06-e523-4608-852f-9021f210c26a"). InnerVolumeSpecName "kube-api-access-s4dq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.507470 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41f51d06-e523-4608-852f-9021f210c26a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.507513 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4dq8\" (UniqueName: \"kubernetes.io/projected/41f51d06-e523-4608-852f-9021f210c26a-kube-api-access-s4dq8\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.534380 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41f51d06-e523-4608-852f-9021f210c26a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41f51d06-e523-4608-852f-9021f210c26a" (UID: "41f51d06-e523-4608-852f-9021f210c26a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.610099 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41f51d06-e523-4608-852f-9021f210c26a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.829154 4895 generic.go:334] "Generic (PLEG): container finished" podID="41f51d06-e523-4608-852f-9021f210c26a" containerID="e57c87cbba0f66e0e18b7bf956b6195484e9dc5483599034213fc1a90b185bda" exitCode=0 Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.829208 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qnmw" event={"ID":"41f51d06-e523-4608-852f-9021f210c26a","Type":"ContainerDied","Data":"e57c87cbba0f66e0e18b7bf956b6195484e9dc5483599034213fc1a90b185bda"} Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.829242 4895 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-4qnmw" event={"ID":"41f51d06-e523-4608-852f-9021f210c26a","Type":"ContainerDied","Data":"af36348eddd68a6d7404d9fa18db5d19e3864fdd5aa786e4e08fbabf374640d0"} Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.829263 4895 scope.go:117] "RemoveContainer" containerID="e57c87cbba0f66e0e18b7bf956b6195484e9dc5483599034213fc1a90b185bda" Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.829452 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qnmw" Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.869103 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4qnmw"] Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.872262 4895 scope.go:117] "RemoveContainer" containerID="d14543b87f364184889d8db8ecc96f06a76f78e1b4cecfd58747949459629c6f" Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.883632 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4qnmw"] Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.906687 4895 scope.go:117] "RemoveContainer" containerID="00b1cafd84bf2830f6e2382496a2e6a0bdc821bc5e7d04ccaf319a001ec55f4b" Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.943659 4895 scope.go:117] "RemoveContainer" containerID="e57c87cbba0f66e0e18b7bf956b6195484e9dc5483599034213fc1a90b185bda" Jan 29 09:24:35 crc kubenswrapper[4895]: E0129 09:24:35.944306 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e57c87cbba0f66e0e18b7bf956b6195484e9dc5483599034213fc1a90b185bda\": container with ID starting with e57c87cbba0f66e0e18b7bf956b6195484e9dc5483599034213fc1a90b185bda not found: ID does not exist" containerID="e57c87cbba0f66e0e18b7bf956b6195484e9dc5483599034213fc1a90b185bda" Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.944383 4895 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e57c87cbba0f66e0e18b7bf956b6195484e9dc5483599034213fc1a90b185bda"} err="failed to get container status \"e57c87cbba0f66e0e18b7bf956b6195484e9dc5483599034213fc1a90b185bda\": rpc error: code = NotFound desc = could not find container \"e57c87cbba0f66e0e18b7bf956b6195484e9dc5483599034213fc1a90b185bda\": container with ID starting with e57c87cbba0f66e0e18b7bf956b6195484e9dc5483599034213fc1a90b185bda not found: ID does not exist" Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.944419 4895 scope.go:117] "RemoveContainer" containerID="d14543b87f364184889d8db8ecc96f06a76f78e1b4cecfd58747949459629c6f" Jan 29 09:24:35 crc kubenswrapper[4895]: E0129 09:24:35.947822 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d14543b87f364184889d8db8ecc96f06a76f78e1b4cecfd58747949459629c6f\": container with ID starting with d14543b87f364184889d8db8ecc96f06a76f78e1b4cecfd58747949459629c6f not found: ID does not exist" containerID="d14543b87f364184889d8db8ecc96f06a76f78e1b4cecfd58747949459629c6f" Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.947898 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d14543b87f364184889d8db8ecc96f06a76f78e1b4cecfd58747949459629c6f"} err="failed to get container status \"d14543b87f364184889d8db8ecc96f06a76f78e1b4cecfd58747949459629c6f\": rpc error: code = NotFound desc = could not find container \"d14543b87f364184889d8db8ecc96f06a76f78e1b4cecfd58747949459629c6f\": container with ID starting with d14543b87f364184889d8db8ecc96f06a76f78e1b4cecfd58747949459629c6f not found: ID does not exist" Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.947973 4895 scope.go:117] "RemoveContainer" containerID="00b1cafd84bf2830f6e2382496a2e6a0bdc821bc5e7d04ccaf319a001ec55f4b" Jan 29 09:24:35 crc kubenswrapper[4895]: E0129 
09:24:35.948701 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00b1cafd84bf2830f6e2382496a2e6a0bdc821bc5e7d04ccaf319a001ec55f4b\": container with ID starting with 00b1cafd84bf2830f6e2382496a2e6a0bdc821bc5e7d04ccaf319a001ec55f4b not found: ID does not exist" containerID="00b1cafd84bf2830f6e2382496a2e6a0bdc821bc5e7d04ccaf319a001ec55f4b" Jan 29 09:24:35 crc kubenswrapper[4895]: I0129 09:24:35.948745 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00b1cafd84bf2830f6e2382496a2e6a0bdc821bc5e7d04ccaf319a001ec55f4b"} err="failed to get container status \"00b1cafd84bf2830f6e2382496a2e6a0bdc821bc5e7d04ccaf319a001ec55f4b\": rpc error: code = NotFound desc = could not find container \"00b1cafd84bf2830f6e2382496a2e6a0bdc821bc5e7d04ccaf319a001ec55f4b\": container with ID starting with 00b1cafd84bf2830f6e2382496a2e6a0bdc821bc5e7d04ccaf319a001ec55f4b not found: ID does not exist" Jan 29 09:24:37 crc kubenswrapper[4895]: I0129 09:24:37.227867 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41f51d06-e523-4608-852f-9021f210c26a" path="/var/lib/kubelet/pods/41f51d06-e523-4608-852f-9021f210c26a/volumes" Jan 29 09:24:58 crc kubenswrapper[4895]: I0129 09:24:58.802432 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5799c46566-89j6v_dcb59826-4f95-4127-b7fe-f32cd95cad8e/barbican-api/0.log" Jan 29 09:24:58 crc kubenswrapper[4895]: I0129 09:24:58.927325 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5799c46566-89j6v_dcb59826-4f95-4127-b7fe-f32cd95cad8e/barbican-api-log/0.log" Jan 29 09:24:59 crc kubenswrapper[4895]: I0129 09:24:59.051959 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-bb49b7794-577rp_c1d9162f-7759-46d6-bea9-a9975470a1d9/barbican-keystone-listener/0.log" Jan 29 09:24:59 crc 
kubenswrapper[4895]: I0129 09:24:59.271375 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-bb49b7794-577rp_c1d9162f-7759-46d6-bea9-a9975470a1d9/barbican-keystone-listener-log/0.log" Jan 29 09:24:59 crc kubenswrapper[4895]: I0129 09:24:59.336575 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-555f79d94f-q55hl_c403270a-6868-4dec-8340-ac3237f9028e/barbican-worker-log/0.log" Jan 29 09:24:59 crc kubenswrapper[4895]: I0129 09:24:59.360672 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-555f79d94f-q55hl_c403270a-6868-4dec-8340-ac3237f9028e/barbican-worker/0.log" Jan 29 09:24:59 crc kubenswrapper[4895]: I0129 09:24:59.499796 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6dced459-73d7-4079-8450-1d22972197c0/ceilometer-central-agent/0.log" Jan 29 09:24:59 crc kubenswrapper[4895]: I0129 09:24:59.544383 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6dced459-73d7-4079-8450-1d22972197c0/ceilometer-notification-agent/0.log" Jan 29 09:24:59 crc kubenswrapper[4895]: I0129 09:24:59.578319 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6dced459-73d7-4079-8450-1d22972197c0/proxy-httpd/0.log" Jan 29 09:24:59 crc kubenswrapper[4895]: I0129 09:24:59.669716 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6dced459-73d7-4079-8450-1d22972197c0/sg-core/0.log" Jan 29 09:24:59 crc kubenswrapper[4895]: I0129 09:24:59.769277 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2e4960bc-f10d-48c0-835d-9616ae852ec8/cinder-api/0.log" Jan 29 09:24:59 crc kubenswrapper[4895]: I0129 09:24:59.820378 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2e4960bc-f10d-48c0-835d-9616ae852ec8/cinder-api-log/0.log" Jan 29 09:25:00 crc 
kubenswrapper[4895]: I0129 09:25:00.054356 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19/probe/0.log" Jan 29 09:25:00 crc kubenswrapper[4895]: I0129 09:25:00.055047 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_96fa6b3a-137a-449a-9e6f-8c1b7a4f5d19/cinder-scheduler/0.log" Jan 29 09:25:00 crc kubenswrapper[4895]: I0129 09:25:00.182040 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-82zv8_d9878c3e-4959-4f63-bfc3-899f9a55eee2/init/0.log" Jan 29 09:25:00 crc kubenswrapper[4895]: I0129 09:25:00.400547 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-82zv8_d9878c3e-4959-4f63-bfc3-899f9a55eee2/init/0.log" Jan 29 09:25:00 crc kubenswrapper[4895]: I0129 09:25:00.470478 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cd318cba-9380-4676-bb83-3256c9c5adf5/glance-httpd/0.log" Jan 29 09:25:00 crc kubenswrapper[4895]: I0129 09:25:00.488950 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-82zv8_d9878c3e-4959-4f63-bfc3-899f9a55eee2/dnsmasq-dns/0.log" Jan 29 09:25:00 crc kubenswrapper[4895]: I0129 09:25:00.647424 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cd318cba-9380-4676-bb83-3256c9c5adf5/glance-log/0.log" Jan 29 09:25:00 crc kubenswrapper[4895]: I0129 09:25:00.741235 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_dbcc1d5c-0822-492b-98ce-667e0f13d497/glance-httpd/0.log" Jan 29 09:25:00 crc kubenswrapper[4895]: I0129 09:25:00.748803 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_dbcc1d5c-0822-492b-98ce-667e0f13d497/glance-log/0.log" Jan 29 09:25:00 crc kubenswrapper[4895]: I0129 
09:25:00.952140 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-7f7db74854-hkzkt_5105e55b-cea6-4b20-bf0a-f7f0410f8aa9/init/0.log" Jan 29 09:25:01 crc kubenswrapper[4895]: I0129 09:25:01.150583 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-7f7db74854-hkzkt_5105e55b-cea6-4b20-bf0a-f7f0410f8aa9/ironic-api-log/0.log" Jan 29 09:25:01 crc kubenswrapper[4895]: I0129 09:25:01.156427 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-7f7db74854-hkzkt_5105e55b-cea6-4b20-bf0a-f7f0410f8aa9/init/0.log" Jan 29 09:25:01 crc kubenswrapper[4895]: I0129 09:25:01.234845 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-7f7db74854-hkzkt_5105e55b-cea6-4b20-bf0a-f7f0410f8aa9/ironic-api/0.log" Jan 29 09:25:01 crc kubenswrapper[4895]: I0129 09:25:01.370664 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/init/0.log" Jan 29 09:25:01 crc kubenswrapper[4895]: I0129 09:25:01.618739 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/ironic-python-agent-init/0.log" Jan 29 09:25:01 crc kubenswrapper[4895]: I0129 09:25:01.687607 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/ironic-python-agent-init/0.log" Jan 29 09:25:01 crc kubenswrapper[4895]: I0129 09:25:01.710609 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/init/0.log" Jan 29 09:25:01 crc kubenswrapper[4895]: I0129 09:25:01.976512 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/ironic-python-agent-init/0.log" Jan 29 09:25:01 crc kubenswrapper[4895]: I0129 09:25:01.992738 4895 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/init/0.log" Jan 29 09:25:02 crc kubenswrapper[4895]: I0129 09:25:02.453179 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/init/0.log" Jan 29 09:25:02 crc kubenswrapper[4895]: I0129 09:25:02.578204 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/ironic-python-agent-init/0.log" Jan 29 09:25:02 crc kubenswrapper[4895]: I0129 09:25:02.711799 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/pxe-init/0.log" Jan 29 09:25:02 crc kubenswrapper[4895]: I0129 09:25:02.820325 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/httpboot/0.log" Jan 29 09:25:03 crc kubenswrapper[4895]: I0129 09:25:03.034797 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/ramdisk-logs/0.log" Jan 29 09:25:03 crc kubenswrapper[4895]: I0129 09:25:03.061023 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/ironic-conductor/0.log" Jan 29 09:25:03 crc kubenswrapper[4895]: I0129 09:25:03.312213 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-4mm74_03042a97-0311-4d0c-9878-380987ec9407/init/0.log" Jan 29 09:25:03 crc kubenswrapper[4895]: I0129 09:25:03.464534 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/pxe-init/0.log" Jan 29 09:25:03 crc kubenswrapper[4895]: I0129 09:25:03.547682 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-db-sync-4mm74_03042a97-0311-4d0c-9878-380987ec9407/ironic-db-sync/0.log" Jan 29 09:25:03 crc kubenswrapper[4895]: I0129 09:25:03.553263 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-4mm74_03042a97-0311-4d0c-9878-380987ec9407/init/0.log" Jan 29 09:25:03 crc kubenswrapper[4895]: I0129 09:25:03.632294 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/pxe-init/0.log" Jan 29 09:25:03 crc kubenswrapper[4895]: I0129 09:25:03.792552 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/ironic-python-agent-init/0.log" Jan 29 09:25:03 crc kubenswrapper[4895]: I0129 09:25:03.834228 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_f893b3e3-3833-4a94-ab55-951f600fdadd/pxe-init/0.log" Jan 29 09:25:03 crc kubenswrapper[4895]: I0129 09:25:03.976640 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/inspector-pxe-init/0.log" Jan 29 09:25:04 crc kubenswrapper[4895]: I0129 09:25:04.015631 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/inspector-pxe-init/0.log" Jan 29 09:25:04 crc kubenswrapper[4895]: I0129 09:25:04.031011 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/ironic-python-agent-init/0.log" Jan 29 09:25:04 crc kubenswrapper[4895]: I0129 09:25:04.207387 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/inspector-httpboot/0.log" Jan 29 09:25:04 crc kubenswrapper[4895]: I0129 09:25:04.225308 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/ironic-python-agent-init/0.log" Jan 29 09:25:04 crc kubenswrapper[4895]: I0129 09:25:04.231138 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/inspector-pxe-init/0.log" Jan 29 09:25:04 crc kubenswrapper[4895]: I0129 09:25:04.290282 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/ironic-inspector/2.log" Jan 29 09:25:04 crc kubenswrapper[4895]: I0129 09:25:04.310160 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/ironic-inspector/1.log" Jan 29 09:25:04 crc kubenswrapper[4895]: I0129 09:25:04.488092 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/ironic-inspector-httpd/0.log" Jan 29 09:25:04 crc kubenswrapper[4895]: I0129 09:25:04.552369 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-db-sync-v7z9b_1318c5c6-26bf-46e6-aba5-ab4e024be588/ironic-inspector-db-sync/0.log" Jan 29 09:25:04 crc kubenswrapper[4895]: I0129 09:25:04.569617 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_9a9f2123-8dc5-46d6-81ae-802f6e92c3a8/ramdisk-logs/0.log" Jan 29 09:25:04 crc kubenswrapper[4895]: I0129 09:25:04.748088 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-78c59f886f-xtrfg_844ab9b8-4b72-401d-b008-db11605452a8/ironic-neutron-agent/2.log" Jan 29 09:25:04 crc kubenswrapper[4895]: I0129 09:25:04.866624 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-78c59f886f-xtrfg_844ab9b8-4b72-401d-b008-db11605452a8/ironic-neutron-agent/1.log" Jan 29 09:25:04 crc kubenswrapper[4895]: I0129 09:25:04.968902 4895 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5f75d78756-glzhf_406e8af5-68c1-48c3-b377-68d3f60c10a9/keystone-api/0.log" Jan 29 09:25:05 crc kubenswrapper[4895]: I0129 09:25:05.029499 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_75059205-4797-4975-98d8-bcbf919748ba/kube-state-metrics/0.log" Jan 29 09:25:05 crc kubenswrapper[4895]: I0129 09:25:05.426219 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6b548b4f8c-kc92t_657c2688-8379-4121-a64a-89c1fd428b57/neutron-api/0.log" Jan 29 09:25:05 crc kubenswrapper[4895]: I0129 09:25:05.446473 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6b548b4f8c-kc92t_657c2688-8379-4121-a64a-89c1fd428b57/neutron-httpd/0.log" Jan 29 09:25:05 crc kubenswrapper[4895]: I0129 09:25:05.830699 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_558cbc7f-9455-49b5-89aa-b898d468ca08/nova-api-log/0.log" Jan 29 09:25:06 crc kubenswrapper[4895]: I0129 09:25:06.014136 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_558cbc7f-9455-49b5-89aa-b898d468ca08/nova-api-api/0.log" Jan 29 09:25:06 crc kubenswrapper[4895]: I0129 09:25:06.036171 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_633d9018-c7c7-420f-9b03-6c983a5c40b4/nova-cell0-conductor-conductor/0.log" Jan 29 09:25:06 crc kubenswrapper[4895]: I0129 09:25:06.321415 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4653a20c-bef0-463a-962d-f1f17b2011e3/nova-cell1-conductor-conductor/0.log" Jan 29 09:25:06 crc kubenswrapper[4895]: I0129 09:25:06.451955 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_212553fe-f689-4d32-9368-e1f5a6a9654d/nova-cell1-novncproxy-novncproxy/0.log" Jan 29 09:25:06 crc kubenswrapper[4895]: I0129 09:25:06.566562 4895 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a2e13290-5cda-49ac-9efd-5e8a72da76b6/nova-metadata-log/0.log" Jan 29 09:25:06 crc kubenswrapper[4895]: I0129 09:25:06.860374 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_205e527c-d0a7-4b85-9542-19a871c61693/mysql-bootstrap/0.log" Jan 29 09:25:06 crc kubenswrapper[4895]: I0129 09:25:06.900178 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_216aa652-e284-4fb8-90bf-d975cc19d1f0/nova-scheduler-scheduler/0.log" Jan 29 09:25:07 crc kubenswrapper[4895]: I0129 09:25:07.119149 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_205e527c-d0a7-4b85-9542-19a871c61693/mysql-bootstrap/0.log" Jan 29 09:25:07 crc kubenswrapper[4895]: I0129 09:25:07.143291 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_205e527c-d0a7-4b85-9542-19a871c61693/galera/0.log" Jan 29 09:25:07 crc kubenswrapper[4895]: I0129 09:25:07.315888 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a2e13290-5cda-49ac-9efd-5e8a72da76b6/nova-metadata-metadata/0.log" Jan 29 09:25:07 crc kubenswrapper[4895]: I0129 09:25:07.457536 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94/mysql-bootstrap/0.log" Jan 29 09:25:07 crc kubenswrapper[4895]: I0129 09:25:07.612715 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94/galera/0.log" Jan 29 09:25:07 crc kubenswrapper[4895]: I0129 09:25:07.636891 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_996b2ef7-6f00-4cbf-b8b7-4d9bb3360c94/mysql-bootstrap/0.log" Jan 29 09:25:07 crc kubenswrapper[4895]: I0129 09:25:07.718481 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_f10bf685-c7de-4126-afc5-6bd68c3e8845/openstackclient/0.log" Jan 29 09:25:07 crc kubenswrapper[4895]: I0129 09:25:07.926364 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lc26n_50cc7d34-44f8-490c-a18c-2d747721d20a/openstack-network-exporter/0.log" Jan 29 09:25:08 crc kubenswrapper[4895]: I0129 09:25:08.018562 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mjz6w_5f71eedb-46ac-474f-9d1e-d4909a49e05b/ovn-controller/0.log" Jan 29 09:25:08 crc kubenswrapper[4895]: I0129 09:25:08.272412 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rzm2l_b283d44c-d996-450c-9b6c-dea58fe633a7/ovsdb-server-init/0.log" Jan 29 09:25:08 crc kubenswrapper[4895]: I0129 09:25:08.532823 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rzm2l_b283d44c-d996-450c-9b6c-dea58fe633a7/ovsdb-server/0.log" Jan 29 09:25:08 crc kubenswrapper[4895]: I0129 09:25:08.560409 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rzm2l_b283d44c-d996-450c-9b6c-dea58fe633a7/ovs-vswitchd/0.log" Jan 29 09:25:08 crc kubenswrapper[4895]: I0129 09:25:08.573618 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rzm2l_b283d44c-d996-450c-9b6c-dea58fe633a7/ovsdb-server-init/0.log" Jan 29 09:25:08 crc kubenswrapper[4895]: I0129 09:25:08.820770 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d524d5b9-7173-4f57-92f5-bf50a940538b/openstack-network-exporter/0.log" Jan 29 09:25:08 crc kubenswrapper[4895]: I0129 09:25:08.821554 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d524d5b9-7173-4f57-92f5-bf50a940538b/ovn-northd/0.log" Jan 29 09:25:08 crc kubenswrapper[4895]: I0129 09:25:08.911422 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_877924c3-f4b2-4040-8b6c-bbc80d6d58af/openstack-network-exporter/0.log" Jan 29 09:25:09 crc kubenswrapper[4895]: I0129 09:25:09.088001 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_877924c3-f4b2-4040-8b6c-bbc80d6d58af/ovsdbserver-nb/0.log" Jan 29 09:25:09 crc kubenswrapper[4895]: I0129 09:25:09.225062 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_250930c1-98a4-4b5d-a0d7-0ba3063bc098/openstack-network-exporter/0.log" Jan 29 09:25:09 crc kubenswrapper[4895]: I0129 09:25:09.290607 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_250930c1-98a4-4b5d-a0d7-0ba3063bc098/ovsdbserver-sb/0.log" Jan 29 09:25:09 crc kubenswrapper[4895]: I0129 09:25:09.551076 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-796fb887fb-dd2s5_2e7b9632-7a45-48f5-8887-4c79543170fd/placement-api/0.log" Jan 29 09:25:09 crc kubenswrapper[4895]: I0129 09:25:09.585688 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-796fb887fb-dd2s5_2e7b9632-7a45-48f5-8887-4c79543170fd/placement-log/0.log" Jan 29 09:25:09 crc kubenswrapper[4895]: I0129 09:25:09.701033 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b1ce25b0-0fc4-4560-88ba-ee5261d106e9/setup-container/0.log" Jan 29 09:25:10 crc kubenswrapper[4895]: I0129 09:25:10.002994 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b1ce25b0-0fc4-4560-88ba-ee5261d106e9/setup-container/0.log" Jan 29 09:25:10 crc kubenswrapper[4895]: I0129 09:25:10.005647 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b1ce25b0-0fc4-4560-88ba-ee5261d106e9/rabbitmq/0.log" Jan 29 09:25:10 crc kubenswrapper[4895]: I0129 09:25:10.069016 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_fb202ed2-1680-4411-83d3-4dcfdc317ac9/setup-container/0.log" Jan 29 09:25:10 crc kubenswrapper[4895]: I0129 09:25:10.327898 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_fb202ed2-1680-4411-83d3-4dcfdc317ac9/setup-container/0.log" Jan 29 09:25:10 crc kubenswrapper[4895]: I0129 09:25:10.424556 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_fb202ed2-1680-4411-83d3-4dcfdc317ac9/rabbitmq/0.log" Jan 29 09:25:10 crc kubenswrapper[4895]: I0129 09:25:10.566821 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5fb7b47b77-cq2p9_cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687/proxy-httpd/0.log" Jan 29 09:25:10 crc kubenswrapper[4895]: I0129 09:25:10.626026 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5fb7b47b77-cq2p9_cfbdf3a1-a1a5-45af-87ee-c49eb5f9f687/proxy-server/0.log" Jan 29 09:25:10 crc kubenswrapper[4895]: I0129 09:25:10.745192 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-drdk8_073f4b22-319f-4cbb-ac96-c0a18da477a6/swift-ring-rebalance/0.log" Jan 29 09:25:10 crc kubenswrapper[4895]: I0129 09:25:10.887268 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/account-auditor/0.log" Jan 29 09:25:10 crc kubenswrapper[4895]: I0129 09:25:10.977767 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/account-reaper/0.log" Jan 29 09:25:11 crc kubenswrapper[4895]: I0129 09:25:11.065396 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/account-replicator/0.log" Jan 29 09:25:11 crc kubenswrapper[4895]: I0129 09:25:11.158107 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/account-server/0.log" Jan 29 09:25:11 crc kubenswrapper[4895]: I0129 09:25:11.359278 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/container-replicator/0.log" Jan 29 09:25:11 crc kubenswrapper[4895]: I0129 09:25:11.383716 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/container-auditor/0.log" Jan 29 09:25:11 crc kubenswrapper[4895]: I0129 09:25:11.385051 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/container-server/0.log" Jan 29 09:25:11 crc kubenswrapper[4895]: I0129 09:25:11.431197 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/container-updater/0.log" Jan 29 09:25:11 crc kubenswrapper[4895]: I0129 09:25:11.620950 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/object-auditor/0.log" Jan 29 09:25:11 crc kubenswrapper[4895]: I0129 09:25:11.704622 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/object-replicator/0.log" Jan 29 09:25:11 crc kubenswrapper[4895]: I0129 09:25:11.718064 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/object-server/0.log" Jan 29 09:25:11 crc kubenswrapper[4895]: I0129 09:25:11.751096 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/object-expirer/0.log" Jan 29 09:25:11 crc kubenswrapper[4895]: I0129 09:25:11.868649 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/object-updater/0.log" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.015139 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/rsync/0.log" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.029212 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_db0d35a0-7174-452f-bd71-2dae8f7dff11/swift-recon-cron/0.log" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.495565 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mq6ts"] Jan 29 09:25:12 crc kubenswrapper[4895]: E0129 09:25:12.501609 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f51d06-e523-4608-852f-9021f210c26a" containerName="extract-utilities" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.501644 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f51d06-e523-4608-852f-9021f210c26a" containerName="extract-utilities" Jan 29 09:25:12 crc kubenswrapper[4895]: E0129 09:25:12.501662 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f51d06-e523-4608-852f-9021f210c26a" containerName="extract-content" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.501673 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f51d06-e523-4608-852f-9021f210c26a" containerName="extract-content" Jan 29 09:25:12 crc kubenswrapper[4895]: E0129 09:25:12.501690 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f51d06-e523-4608-852f-9021f210c26a" containerName="registry-server" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.501704 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f51d06-e523-4608-852f-9021f210c26a" containerName="registry-server" Jan 29 09:25:12 crc kubenswrapper[4895]: E0129 09:25:12.501717 4895 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="da4fc6e3-cebd-4ed2-a002-4278d4400d1a" containerName="container-00" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.501723 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4fc6e3-cebd-4ed2-a002-4278d4400d1a" containerName="container-00" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.502005 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="da4fc6e3-cebd-4ed2-a002-4278d4400d1a" containerName="container-00" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.502039 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f51d06-e523-4608-852f-9021f210c26a" containerName="registry-server" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.503717 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.507245 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mq6ts"] Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.587183 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t6wn\" (UniqueName: \"kubernetes.io/projected/4ee8343f-d847-4168-8d96-0028e2702af9-kube-api-access-6t6wn\") pod \"community-operators-mq6ts\" (UID: \"4ee8343f-d847-4168-8d96-0028e2702af9\") " pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.587785 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ee8343f-d847-4168-8d96-0028e2702af9-catalog-content\") pod \"community-operators-mq6ts\" (UID: \"4ee8343f-d847-4168-8d96-0028e2702af9\") " pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.587881 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ee8343f-d847-4168-8d96-0028e2702af9-utilities\") pod \"community-operators-mq6ts\" (UID: \"4ee8343f-d847-4168-8d96-0028e2702af9\") " pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.696003 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t6wn\" (UniqueName: \"kubernetes.io/projected/4ee8343f-d847-4168-8d96-0028e2702af9-kube-api-access-6t6wn\") pod \"community-operators-mq6ts\" (UID: \"4ee8343f-d847-4168-8d96-0028e2702af9\") " pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.696217 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ee8343f-d847-4168-8d96-0028e2702af9-catalog-content\") pod \"community-operators-mq6ts\" (UID: \"4ee8343f-d847-4168-8d96-0028e2702af9\") " pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.696371 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ee8343f-d847-4168-8d96-0028e2702af9-utilities\") pod \"community-operators-mq6ts\" (UID: \"4ee8343f-d847-4168-8d96-0028e2702af9\") " pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.697321 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ee8343f-d847-4168-8d96-0028e2702af9-utilities\") pod \"community-operators-mq6ts\" (UID: \"4ee8343f-d847-4168-8d96-0028e2702af9\") " pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.698484 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ee8343f-d847-4168-8d96-0028e2702af9-catalog-content\") pod \"community-operators-mq6ts\" (UID: \"4ee8343f-d847-4168-8d96-0028e2702af9\") " pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.733971 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t6wn\" (UniqueName: \"kubernetes.io/projected/4ee8343f-d847-4168-8d96-0028e2702af9-kube-api-access-6t6wn\") pod \"community-operators-mq6ts\" (UID: \"4ee8343f-d847-4168-8d96-0028e2702af9\") " pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.833687 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.895771 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4b9pg"] Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.902081 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:12 crc kubenswrapper[4895]: I0129 09:25:12.906414 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4b9pg"] Jan 29 09:25:13 crc kubenswrapper[4895]: I0129 09:25:13.019270 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmjvd\" (UniqueName: \"kubernetes.io/projected/ca11af0c-41c1-4c97-9d03-bcd373a67f66-kube-api-access-mmjvd\") pod \"redhat-marketplace-4b9pg\" (UID: \"ca11af0c-41c1-4c97-9d03-bcd373a67f66\") " pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:13 crc kubenswrapper[4895]: I0129 09:25:13.019431 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca11af0c-41c1-4c97-9d03-bcd373a67f66-catalog-content\") pod \"redhat-marketplace-4b9pg\" (UID: \"ca11af0c-41c1-4c97-9d03-bcd373a67f66\") " pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:13 crc kubenswrapper[4895]: I0129 09:25:13.019463 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca11af0c-41c1-4c97-9d03-bcd373a67f66-utilities\") pod \"redhat-marketplace-4b9pg\" (UID: \"ca11af0c-41c1-4c97-9d03-bcd373a67f66\") " pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:13 crc kubenswrapper[4895]: I0129 09:25:13.133250 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca11af0c-41c1-4c97-9d03-bcd373a67f66-catalog-content\") pod \"redhat-marketplace-4b9pg\" (UID: \"ca11af0c-41c1-4c97-9d03-bcd373a67f66\") " pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:13 crc kubenswrapper[4895]: I0129 09:25:13.133744 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca11af0c-41c1-4c97-9d03-bcd373a67f66-utilities\") pod \"redhat-marketplace-4b9pg\" (UID: \"ca11af0c-41c1-4c97-9d03-bcd373a67f66\") " pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:13 crc kubenswrapper[4895]: I0129 09:25:13.134008 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmjvd\" (UniqueName: \"kubernetes.io/projected/ca11af0c-41c1-4c97-9d03-bcd373a67f66-kube-api-access-mmjvd\") pod \"redhat-marketplace-4b9pg\" (UID: \"ca11af0c-41c1-4c97-9d03-bcd373a67f66\") " pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:13 crc kubenswrapper[4895]: I0129 09:25:13.134134 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca11af0c-41c1-4c97-9d03-bcd373a67f66-catalog-content\") pod \"redhat-marketplace-4b9pg\" (UID: \"ca11af0c-41c1-4c97-9d03-bcd373a67f66\") " pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:13 crc kubenswrapper[4895]: I0129 09:25:13.134550 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca11af0c-41c1-4c97-9d03-bcd373a67f66-utilities\") pod \"redhat-marketplace-4b9pg\" (UID: \"ca11af0c-41c1-4c97-9d03-bcd373a67f66\") " pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:13 crc kubenswrapper[4895]: I0129 09:25:13.194116 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmjvd\" (UniqueName: \"kubernetes.io/projected/ca11af0c-41c1-4c97-9d03-bcd373a67f66-kube-api-access-mmjvd\") pod \"redhat-marketplace-4b9pg\" (UID: \"ca11af0c-41c1-4c97-9d03-bcd373a67f66\") " pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:13 crc kubenswrapper[4895]: I0129 09:25:13.340636 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:13 crc kubenswrapper[4895]: I0129 09:25:13.589831 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mq6ts"] Jan 29 09:25:14 crc kubenswrapper[4895]: I0129 09:25:14.024156 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4b9pg"] Jan 29 09:25:14 crc kubenswrapper[4895]: I0129 09:25:14.254610 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b9pg" event={"ID":"ca11af0c-41c1-4c97-9d03-bcd373a67f66","Type":"ContainerStarted","Data":"76eb7e3d5a5a2535d3a800925963195096774ac6b8db4f1824e0d8ccd6c06871"} Jan 29 09:25:14 crc kubenswrapper[4895]: I0129 09:25:14.263754 4895 generic.go:334] "Generic (PLEG): container finished" podID="4ee8343f-d847-4168-8d96-0028e2702af9" containerID="1175a9556ea03e8ea8972fab48ca1067b93bb1daaf8478c99cd02c5fa84509bc" exitCode=0 Jan 29 09:25:14 crc kubenswrapper[4895]: I0129 09:25:14.263817 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq6ts" event={"ID":"4ee8343f-d847-4168-8d96-0028e2702af9","Type":"ContainerDied","Data":"1175a9556ea03e8ea8972fab48ca1067b93bb1daaf8478c99cd02c5fa84509bc"} Jan 29 09:25:14 crc kubenswrapper[4895]: I0129 09:25:14.263861 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq6ts" event={"ID":"4ee8343f-d847-4168-8d96-0028e2702af9","Type":"ContainerStarted","Data":"4b80260825d7ca0f58dd6b4011fec6195fb9a70541aba56040771aa0ebc1a4b9"} Jan 29 09:25:15 crc kubenswrapper[4895]: I0129 09:25:15.283449 4895 generic.go:334] "Generic (PLEG): container finished" podID="ca11af0c-41c1-4c97-9d03-bcd373a67f66" containerID="2a9b6bf850c2b00b1a6fb8847cb565bcf4a4ebb961fd0d95dc36be3c6b9e4d99" exitCode=0 Jan 29 09:25:15 crc kubenswrapper[4895]: I0129 09:25:15.285538 4895 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b9pg" event={"ID":"ca11af0c-41c1-4c97-9d03-bcd373a67f66","Type":"ContainerDied","Data":"2a9b6bf850c2b00b1a6fb8847cb565bcf4a4ebb961fd0d95dc36be3c6b9e4d99"} Jan 29 09:25:15 crc kubenswrapper[4895]: I0129 09:25:15.296805 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq6ts" event={"ID":"4ee8343f-d847-4168-8d96-0028e2702af9","Type":"ContainerStarted","Data":"77659f2cf15dfdf1e0f3f6a354867badfe6b52e67cb6e5afcbd516a3cc2aea76"} Jan 29 09:25:16 crc kubenswrapper[4895]: I0129 09:25:16.311387 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b9pg" event={"ID":"ca11af0c-41c1-4c97-9d03-bcd373a67f66","Type":"ContainerStarted","Data":"d9544179c56ff31ae2cef93f51fb50c349969805ac885ea8c581062b5bf09a98"} Jan 29 09:25:16 crc kubenswrapper[4895]: I0129 09:25:16.316269 4895 generic.go:334] "Generic (PLEG): container finished" podID="4ee8343f-d847-4168-8d96-0028e2702af9" containerID="77659f2cf15dfdf1e0f3f6a354867badfe6b52e67cb6e5afcbd516a3cc2aea76" exitCode=0 Jan 29 09:25:16 crc kubenswrapper[4895]: I0129 09:25:16.316335 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq6ts" event={"ID":"4ee8343f-d847-4168-8d96-0028e2702af9","Type":"ContainerDied","Data":"77659f2cf15dfdf1e0f3f6a354867badfe6b52e67cb6e5afcbd516a3cc2aea76"} Jan 29 09:25:16 crc kubenswrapper[4895]: I0129 09:25:16.516031 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d720a04a-6de4-4dd9-b918-471d3d69de73/memcached/0.log" Jan 29 09:25:17 crc kubenswrapper[4895]: I0129 09:25:17.330422 4895 generic.go:334] "Generic (PLEG): container finished" podID="ca11af0c-41c1-4c97-9d03-bcd373a67f66" containerID="d9544179c56ff31ae2cef93f51fb50c349969805ac885ea8c581062b5bf09a98" exitCode=0 Jan 29 09:25:17 crc kubenswrapper[4895]: I0129 09:25:17.330542 4895 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b9pg" event={"ID":"ca11af0c-41c1-4c97-9d03-bcd373a67f66","Type":"ContainerDied","Data":"d9544179c56ff31ae2cef93f51fb50c349969805ac885ea8c581062b5bf09a98"} Jan 29 09:25:17 crc kubenswrapper[4895]: I0129 09:25:17.335891 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq6ts" event={"ID":"4ee8343f-d847-4168-8d96-0028e2702af9","Type":"ContainerStarted","Data":"c0eca0e337426aed0428715da0f4a4c425e2523090c7477eaeda5fdb05c6449e"} Jan 29 09:25:17 crc kubenswrapper[4895]: I0129 09:25:17.380829 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mq6ts" podStartSLOduration=2.940583554 podStartE2EDuration="5.380804059s" podCreationTimestamp="2026-01-29 09:25:12 +0000 UTC" firstStartedPulling="2026-01-29 09:25:14.276809946 +0000 UTC m=+2655.918318102" lastFinishedPulling="2026-01-29 09:25:16.717030461 +0000 UTC m=+2658.358538607" observedRunningTime="2026-01-29 09:25:17.37371449 +0000 UTC m=+2659.015222636" watchObservedRunningTime="2026-01-29 09:25:17.380804059 +0000 UTC m=+2659.022312205" Jan 29 09:25:18 crc kubenswrapper[4895]: I0129 09:25:18.349285 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b9pg" event={"ID":"ca11af0c-41c1-4c97-9d03-bcd373a67f66","Type":"ContainerStarted","Data":"960dbce723ee5a6e361e51fc600f0341e8d67db40dd8ef18670cdb49d8646040"} Jan 29 09:25:18 crc kubenswrapper[4895]: I0129 09:25:18.382005 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4b9pg" podStartSLOduration=3.91311309 podStartE2EDuration="6.381891716s" podCreationTimestamp="2026-01-29 09:25:12 +0000 UTC" firstStartedPulling="2026-01-29 09:25:15.286407099 +0000 UTC m=+2656.927915245" lastFinishedPulling="2026-01-29 09:25:17.755185725 +0000 UTC m=+2659.396693871" 
observedRunningTime="2026-01-29 09:25:18.37041263 +0000 UTC m=+2660.011920776" watchObservedRunningTime="2026-01-29 09:25:18.381891716 +0000 UTC m=+2660.023399862" Jan 29 09:25:22 crc kubenswrapper[4895]: I0129 09:25:22.835732 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:22 crc kubenswrapper[4895]: I0129 09:25:22.836882 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:22 crc kubenswrapper[4895]: I0129 09:25:22.899632 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:23 crc kubenswrapper[4895]: I0129 09:25:23.342549 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:23 crc kubenswrapper[4895]: I0129 09:25:23.342599 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:23 crc kubenswrapper[4895]: I0129 09:25:23.408781 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:23 crc kubenswrapper[4895]: I0129 09:25:23.457323 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:23 crc kubenswrapper[4895]: I0129 09:25:23.459725 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:24 crc kubenswrapper[4895]: I0129 09:25:24.481165 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4b9pg"] Jan 29 09:25:25 crc kubenswrapper[4895]: I0129 09:25:25.412777 4895 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-4b9pg" podUID="ca11af0c-41c1-4c97-9d03-bcd373a67f66" containerName="registry-server" containerID="cri-o://960dbce723ee5a6e361e51fc600f0341e8d67db40dd8ef18670cdb49d8646040" gracePeriod=2 Jan 29 09:25:25 crc kubenswrapper[4895]: I0129 09:25:25.883893 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mq6ts"] Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.017235 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.091130 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmjvd\" (UniqueName: \"kubernetes.io/projected/ca11af0c-41c1-4c97-9d03-bcd373a67f66-kube-api-access-mmjvd\") pod \"ca11af0c-41c1-4c97-9d03-bcd373a67f66\" (UID: \"ca11af0c-41c1-4c97-9d03-bcd373a67f66\") " Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.091814 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca11af0c-41c1-4c97-9d03-bcd373a67f66-catalog-content\") pod \"ca11af0c-41c1-4c97-9d03-bcd373a67f66\" (UID: \"ca11af0c-41c1-4c97-9d03-bcd373a67f66\") " Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.092132 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca11af0c-41c1-4c97-9d03-bcd373a67f66-utilities\") pod \"ca11af0c-41c1-4c97-9d03-bcd373a67f66\" (UID: \"ca11af0c-41c1-4c97-9d03-bcd373a67f66\") " Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.092963 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca11af0c-41c1-4c97-9d03-bcd373a67f66-utilities" (OuterVolumeSpecName: "utilities") pod "ca11af0c-41c1-4c97-9d03-bcd373a67f66" (UID: 
"ca11af0c-41c1-4c97-9d03-bcd373a67f66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.093363 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca11af0c-41c1-4c97-9d03-bcd373a67f66-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.112802 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca11af0c-41c1-4c97-9d03-bcd373a67f66-kube-api-access-mmjvd" (OuterVolumeSpecName: "kube-api-access-mmjvd") pod "ca11af0c-41c1-4c97-9d03-bcd373a67f66" (UID: "ca11af0c-41c1-4c97-9d03-bcd373a67f66"). InnerVolumeSpecName "kube-api-access-mmjvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.118526 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca11af0c-41c1-4c97-9d03-bcd373a67f66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca11af0c-41c1-4c97-9d03-bcd373a67f66" (UID: "ca11af0c-41c1-4c97-9d03-bcd373a67f66"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.196766 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmjvd\" (UniqueName: \"kubernetes.io/projected/ca11af0c-41c1-4c97-9d03-bcd373a67f66-kube-api-access-mmjvd\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.196816 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca11af0c-41c1-4c97-9d03-bcd373a67f66-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.426966 4895 generic.go:334] "Generic (PLEG): container finished" podID="ca11af0c-41c1-4c97-9d03-bcd373a67f66" containerID="960dbce723ee5a6e361e51fc600f0341e8d67db40dd8ef18670cdb49d8646040" exitCode=0 Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.427048 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b9pg" event={"ID":"ca11af0c-41c1-4c97-9d03-bcd373a67f66","Type":"ContainerDied","Data":"960dbce723ee5a6e361e51fc600f0341e8d67db40dd8ef18670cdb49d8646040"} Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.427527 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b9pg" event={"ID":"ca11af0c-41c1-4c97-9d03-bcd373a67f66","Type":"ContainerDied","Data":"76eb7e3d5a5a2535d3a800925963195096774ac6b8db4f1824e0d8ccd6c06871"} Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.427096 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4b9pg" Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.427680 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mq6ts" podUID="4ee8343f-d847-4168-8d96-0028e2702af9" containerName="registry-server" containerID="cri-o://c0eca0e337426aed0428715da0f4a4c425e2523090c7477eaeda5fdb05c6449e" gracePeriod=2 Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.427575 4895 scope.go:117] "RemoveContainer" containerID="960dbce723ee5a6e361e51fc600f0341e8d67db40dd8ef18670cdb49d8646040" Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.457667 4895 scope.go:117] "RemoveContainer" containerID="d9544179c56ff31ae2cef93f51fb50c349969805ac885ea8c581062b5bf09a98" Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.478149 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4b9pg"] Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.489174 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4b9pg"] Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.493404 4895 scope.go:117] "RemoveContainer" containerID="2a9b6bf850c2b00b1a6fb8847cb565bcf4a4ebb961fd0d95dc36be3c6b9e4d99" Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.646266 4895 scope.go:117] "RemoveContainer" containerID="960dbce723ee5a6e361e51fc600f0341e8d67db40dd8ef18670cdb49d8646040" Jan 29 09:25:26 crc kubenswrapper[4895]: E0129 09:25:26.646877 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"960dbce723ee5a6e361e51fc600f0341e8d67db40dd8ef18670cdb49d8646040\": container with ID starting with 960dbce723ee5a6e361e51fc600f0341e8d67db40dd8ef18670cdb49d8646040 not found: ID does not exist" containerID="960dbce723ee5a6e361e51fc600f0341e8d67db40dd8ef18670cdb49d8646040" Jan 29 09:25:26 crc 
kubenswrapper[4895]: I0129 09:25:26.646922 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960dbce723ee5a6e361e51fc600f0341e8d67db40dd8ef18670cdb49d8646040"} err="failed to get container status \"960dbce723ee5a6e361e51fc600f0341e8d67db40dd8ef18670cdb49d8646040\": rpc error: code = NotFound desc = could not find container \"960dbce723ee5a6e361e51fc600f0341e8d67db40dd8ef18670cdb49d8646040\": container with ID starting with 960dbce723ee5a6e361e51fc600f0341e8d67db40dd8ef18670cdb49d8646040 not found: ID does not exist" Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.646950 4895 scope.go:117] "RemoveContainer" containerID="d9544179c56ff31ae2cef93f51fb50c349969805ac885ea8c581062b5bf09a98" Jan 29 09:25:26 crc kubenswrapper[4895]: E0129 09:25:26.647581 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9544179c56ff31ae2cef93f51fb50c349969805ac885ea8c581062b5bf09a98\": container with ID starting with d9544179c56ff31ae2cef93f51fb50c349969805ac885ea8c581062b5bf09a98 not found: ID does not exist" containerID="d9544179c56ff31ae2cef93f51fb50c349969805ac885ea8c581062b5bf09a98" Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.647637 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9544179c56ff31ae2cef93f51fb50c349969805ac885ea8c581062b5bf09a98"} err="failed to get container status \"d9544179c56ff31ae2cef93f51fb50c349969805ac885ea8c581062b5bf09a98\": rpc error: code = NotFound desc = could not find container \"d9544179c56ff31ae2cef93f51fb50c349969805ac885ea8c581062b5bf09a98\": container with ID starting with d9544179c56ff31ae2cef93f51fb50c349969805ac885ea8c581062b5bf09a98 not found: ID does not exist" Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.647678 4895 scope.go:117] "RemoveContainer" containerID="2a9b6bf850c2b00b1a6fb8847cb565bcf4a4ebb961fd0d95dc36be3c6b9e4d99" Jan 29 
09:25:26 crc kubenswrapper[4895]: E0129 09:25:26.648510 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a9b6bf850c2b00b1a6fb8847cb565bcf4a4ebb961fd0d95dc36be3c6b9e4d99\": container with ID starting with 2a9b6bf850c2b00b1a6fb8847cb565bcf4a4ebb961fd0d95dc36be3c6b9e4d99 not found: ID does not exist" containerID="2a9b6bf850c2b00b1a6fb8847cb565bcf4a4ebb961fd0d95dc36be3c6b9e4d99" Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.648595 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a9b6bf850c2b00b1a6fb8847cb565bcf4a4ebb961fd0d95dc36be3c6b9e4d99"} err="failed to get container status \"2a9b6bf850c2b00b1a6fb8847cb565bcf4a4ebb961fd0d95dc36be3c6b9e4d99\": rpc error: code = NotFound desc = could not find container \"2a9b6bf850c2b00b1a6fb8847cb565bcf4a4ebb961fd0d95dc36be3c6b9e4d99\": container with ID starting with 2a9b6bf850c2b00b1a6fb8847cb565bcf4a4ebb961fd0d95dc36be3c6b9e4d99 not found: ID does not exist" Jan 29 09:25:26 crc kubenswrapper[4895]: I0129 09:25:26.980217 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.118021 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t6wn\" (UniqueName: \"kubernetes.io/projected/4ee8343f-d847-4168-8d96-0028e2702af9-kube-api-access-6t6wn\") pod \"4ee8343f-d847-4168-8d96-0028e2702af9\" (UID: \"4ee8343f-d847-4168-8d96-0028e2702af9\") " Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.118261 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ee8343f-d847-4168-8d96-0028e2702af9-catalog-content\") pod \"4ee8343f-d847-4168-8d96-0028e2702af9\" (UID: \"4ee8343f-d847-4168-8d96-0028e2702af9\") " Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.118323 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ee8343f-d847-4168-8d96-0028e2702af9-utilities\") pod \"4ee8343f-d847-4168-8d96-0028e2702af9\" (UID: \"4ee8343f-d847-4168-8d96-0028e2702af9\") " Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.119620 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ee8343f-d847-4168-8d96-0028e2702af9-utilities" (OuterVolumeSpecName: "utilities") pod "4ee8343f-d847-4168-8d96-0028e2702af9" (UID: "4ee8343f-d847-4168-8d96-0028e2702af9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.127235 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee8343f-d847-4168-8d96-0028e2702af9-kube-api-access-6t6wn" (OuterVolumeSpecName: "kube-api-access-6t6wn") pod "4ee8343f-d847-4168-8d96-0028e2702af9" (UID: "4ee8343f-d847-4168-8d96-0028e2702af9"). InnerVolumeSpecName "kube-api-access-6t6wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.179479 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ee8343f-d847-4168-8d96-0028e2702af9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ee8343f-d847-4168-8d96-0028e2702af9" (UID: "4ee8343f-d847-4168-8d96-0028e2702af9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.220923 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t6wn\" (UniqueName: \"kubernetes.io/projected/4ee8343f-d847-4168-8d96-0028e2702af9-kube-api-access-6t6wn\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.220965 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ee8343f-d847-4168-8d96-0028e2702af9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.220979 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ee8343f-d847-4168-8d96-0028e2702af9-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.224834 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca11af0c-41c1-4c97-9d03-bcd373a67f66" path="/var/lib/kubelet/pods/ca11af0c-41c1-4c97-9d03-bcd373a67f66/volumes" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.443303 4895 generic.go:334] "Generic (PLEG): container finished" podID="4ee8343f-d847-4168-8d96-0028e2702af9" containerID="c0eca0e337426aed0428715da0f4a4c425e2523090c7477eaeda5fdb05c6449e" exitCode=0 Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.443407 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq6ts" 
event={"ID":"4ee8343f-d847-4168-8d96-0028e2702af9","Type":"ContainerDied","Data":"c0eca0e337426aed0428715da0f4a4c425e2523090c7477eaeda5fdb05c6449e"} Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.443530 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq6ts" event={"ID":"4ee8343f-d847-4168-8d96-0028e2702af9","Type":"ContainerDied","Data":"4b80260825d7ca0f58dd6b4011fec6195fb9a70541aba56040771aa0ebc1a4b9"} Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.443564 4895 scope.go:117] "RemoveContainer" containerID="c0eca0e337426aed0428715da0f4a4c425e2523090c7477eaeda5fdb05c6449e" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.443982 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mq6ts" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.466976 4895 scope.go:117] "RemoveContainer" containerID="77659f2cf15dfdf1e0f3f6a354867badfe6b52e67cb6e5afcbd516a3cc2aea76" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.478151 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mq6ts"] Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.493444 4895 scope.go:117] "RemoveContainer" containerID="1175a9556ea03e8ea8972fab48ca1067b93bb1daaf8478c99cd02c5fa84509bc" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.498872 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mq6ts"] Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.519520 4895 scope.go:117] "RemoveContainer" containerID="c0eca0e337426aed0428715da0f4a4c425e2523090c7477eaeda5fdb05c6449e" Jan 29 09:25:27 crc kubenswrapper[4895]: E0129 09:25:27.520368 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0eca0e337426aed0428715da0f4a4c425e2523090c7477eaeda5fdb05c6449e\": container 
with ID starting with c0eca0e337426aed0428715da0f4a4c425e2523090c7477eaeda5fdb05c6449e not found: ID does not exist" containerID="c0eca0e337426aed0428715da0f4a4c425e2523090c7477eaeda5fdb05c6449e" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.520438 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0eca0e337426aed0428715da0f4a4c425e2523090c7477eaeda5fdb05c6449e"} err="failed to get container status \"c0eca0e337426aed0428715da0f4a4c425e2523090c7477eaeda5fdb05c6449e\": rpc error: code = NotFound desc = could not find container \"c0eca0e337426aed0428715da0f4a4c425e2523090c7477eaeda5fdb05c6449e\": container with ID starting with c0eca0e337426aed0428715da0f4a4c425e2523090c7477eaeda5fdb05c6449e not found: ID does not exist" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.520482 4895 scope.go:117] "RemoveContainer" containerID="77659f2cf15dfdf1e0f3f6a354867badfe6b52e67cb6e5afcbd516a3cc2aea76" Jan 29 09:25:27 crc kubenswrapper[4895]: E0129 09:25:27.520967 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77659f2cf15dfdf1e0f3f6a354867badfe6b52e67cb6e5afcbd516a3cc2aea76\": container with ID starting with 77659f2cf15dfdf1e0f3f6a354867badfe6b52e67cb6e5afcbd516a3cc2aea76 not found: ID does not exist" containerID="77659f2cf15dfdf1e0f3f6a354867badfe6b52e67cb6e5afcbd516a3cc2aea76" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.521023 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77659f2cf15dfdf1e0f3f6a354867badfe6b52e67cb6e5afcbd516a3cc2aea76"} err="failed to get container status \"77659f2cf15dfdf1e0f3f6a354867badfe6b52e67cb6e5afcbd516a3cc2aea76\": rpc error: code = NotFound desc = could not find container \"77659f2cf15dfdf1e0f3f6a354867badfe6b52e67cb6e5afcbd516a3cc2aea76\": container with ID starting with 77659f2cf15dfdf1e0f3f6a354867badfe6b52e67cb6e5afcbd516a3cc2aea76 not 
found: ID does not exist" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.521058 4895 scope.go:117] "RemoveContainer" containerID="1175a9556ea03e8ea8972fab48ca1067b93bb1daaf8478c99cd02c5fa84509bc" Jan 29 09:25:27 crc kubenswrapper[4895]: E0129 09:25:27.521358 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1175a9556ea03e8ea8972fab48ca1067b93bb1daaf8478c99cd02c5fa84509bc\": container with ID starting with 1175a9556ea03e8ea8972fab48ca1067b93bb1daaf8478c99cd02c5fa84509bc not found: ID does not exist" containerID="1175a9556ea03e8ea8972fab48ca1067b93bb1daaf8478c99cd02c5fa84509bc" Jan 29 09:25:27 crc kubenswrapper[4895]: I0129 09:25:27.521415 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1175a9556ea03e8ea8972fab48ca1067b93bb1daaf8478c99cd02c5fa84509bc"} err="failed to get container status \"1175a9556ea03e8ea8972fab48ca1067b93bb1daaf8478c99cd02c5fa84509bc\": rpc error: code = NotFound desc = could not find container \"1175a9556ea03e8ea8972fab48ca1067b93bb1daaf8478c99cd02c5fa84509bc\": container with ID starting with 1175a9556ea03e8ea8972fab48ca1067b93bb1daaf8478c99cd02c5fa84509bc not found: ID does not exist" Jan 29 09:25:29 crc kubenswrapper[4895]: I0129 09:25:29.225080 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ee8343f-d847-4168-8d96-0028e2702af9" path="/var/lib/kubelet/pods/4ee8343f-d847-4168-8d96-0028e2702af9/volumes" Jan 29 09:25:40 crc kubenswrapper[4895]: I0129 09:25:40.544565 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-gd75d_bc16fc79-c074-4969-af29-c46fdd06f9f8/manager/0.log" Jan 29 09:25:40 crc kubenswrapper[4895]: I0129 09:25:40.693494 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8_f1518b1d-569a-475c-ac03-5ccf624c3a36/util/0.log" Jan 29 09:25:40 crc kubenswrapper[4895]: I0129 09:25:40.905584 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8_f1518b1d-569a-475c-ac03-5ccf624c3a36/pull/0.log" Jan 29 09:25:40 crc kubenswrapper[4895]: I0129 09:25:40.934129 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8_f1518b1d-569a-475c-ac03-5ccf624c3a36/util/0.log" Jan 29 09:25:40 crc kubenswrapper[4895]: I0129 09:25:40.948376 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8_f1518b1d-569a-475c-ac03-5ccf624c3a36/pull/0.log" Jan 29 09:25:41 crc kubenswrapper[4895]: I0129 09:25:41.116298 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8_f1518b1d-569a-475c-ac03-5ccf624c3a36/pull/0.log" Jan 29 09:25:41 crc kubenswrapper[4895]: I0129 09:25:41.137222 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8_f1518b1d-569a-475c-ac03-5ccf624c3a36/extract/0.log" Jan 29 09:25:41 crc kubenswrapper[4895]: I0129 09:25:41.200470 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c96b145007a2404ab8a69ddeaa84b14f1e1c3894309eb0acc9880e3096p2lr8_f1518b1d-569a-475c-ac03-5ccf624c3a36/util/0.log" Jan 29 09:25:41 crc kubenswrapper[4895]: I0129 09:25:41.418361 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-6cz2h_d4d2a9b0-6258-4257-9824-74abbbc40b24/manager/0.log" Jan 29 09:25:41 crc 
kubenswrapper[4895]: I0129 09:25:41.456795 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-58zzj_b2dd46da-1ebf-489f-8467-eab7fc206736/manager/0.log" Jan 29 09:25:41 crc kubenswrapper[4895]: I0129 09:25:41.699128 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-7hp5l_e97a1d25-e9ba-4ce2-b172-035afb18721b/manager/0.log" Jan 29 09:25:41 crc kubenswrapper[4895]: I0129 09:25:41.735073 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-sdkzk_5e73fff0-3497-4937-bfe0-10bea87ddeb3/manager/0.log" Jan 29 09:25:41 crc kubenswrapper[4895]: I0129 09:25:41.943287 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-9dpss_ad4a2a80-d64f-45b0-bea8-dc2e5c7ea050/manager/0.log" Jan 29 09:25:42 crc kubenswrapper[4895]: I0129 09:25:42.461813 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54c4948594-l45qb_baa89b4d-cf32-498b-a624-585afea7f964/manager/0.log" Jan 29 09:25:42 crc kubenswrapper[4895]: I0129 09:25:42.482661 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-tptkw_cbca22f6-6189-4f59-b9bd-832466c437d1/manager/0.log" Jan 29 09:25:42 crc kubenswrapper[4895]: I0129 09:25:42.497412 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-8t4nd_348e067e-1b54-43e2-9c01-bf430f7a3630/manager/0.log" Jan 29 09:25:42 crc kubenswrapper[4895]: I0129 09:25:42.667311 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-pq8r4_358815d3-7542-429d-bfa0-742e75ada2f6/manager/0.log" Jan 29 
09:25:42 crc kubenswrapper[4895]: I0129 09:25:42.744339 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-dg5kf_bb23ce65-61d9-4868-8008-7582ded2bff2/manager/0.log" Jan 29 09:25:43 crc kubenswrapper[4895]: I0129 09:25:43.013286 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-zbdxv_c57b39e7-275d-4ef2-af51-3e0b014182ee/manager/0.log" Jan 29 09:25:43 crc kubenswrapper[4895]: I0129 09:25:43.112441 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-zpdkh_f7276bca-f319-46bf-a1b4-92a6aec8e6e6/manager/0.log" Jan 29 09:25:43 crc kubenswrapper[4895]: I0129 09:25:43.191551 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-qz9c2_001b758d-81ef-40e5-b53a-7c264915580d/manager/0.log" Jan 29 09:25:43 crc kubenswrapper[4895]: I0129 09:25:43.311688 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dlbf8d_d1c9e344-0b8e-40a8-87a3-a6fd886f1eb8/manager/0.log" Jan 29 09:25:43 crc kubenswrapper[4895]: I0129 09:25:43.624938 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-777976898d-2mx8n_5567d75e-d4d1-4f59-a79b-b185eaadd750/operator/0.log" Jan 29 09:25:43 crc kubenswrapper[4895]: I0129 09:25:43.810707 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-56sqg_a833ad23-634a-4270-a6aa-267480e7bb2a/registry-server/0.log" Jan 29 09:25:44 crc kubenswrapper[4895]: I0129 09:25:44.067118 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-mj7xz_6bf40523-2804-408c-b50d-cb04bf5b32fc/manager/0.log" 
Jan 29 09:25:44 crc kubenswrapper[4895]: I0129 09:25:44.196290 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-mnp2h_bf9282d5-a557-4321-b05d-35552e124429/manager/0.log" Jan 29 09:25:44 crc kubenswrapper[4895]: I0129 09:25:44.506618 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-569b5dc57f-cn6fr_22d12b29-fd4e-4aa2-9081-a79a3a539dab/manager/0.log" Jan 29 09:25:44 crc kubenswrapper[4895]: I0129 09:25:44.594938 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-pntdq_c268affd-83d0-4313-a5ba-ee20846ad416/manager/0.log" Jan 29 09:25:45 crc kubenswrapper[4895]: I0129 09:25:45.645597 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fh6n2_1c9af700-ef2b-4d02-a76f-77d31d981a5f/operator/0.log" Jan 29 09:25:45 crc kubenswrapper[4895]: I0129 09:25:45.675729 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-fczp5_853077df-3183-4811-8554-5940dc41912e/manager/0.log" Jan 29 09:25:45 crc kubenswrapper[4895]: I0129 09:25:45.845390 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-4zrlz_7520cf55-cb4a-4598-80d9-499ab60f5ff1/manager/0.log" Jan 29 09:25:45 crc kubenswrapper[4895]: I0129 09:25:45.853360 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-gxq7x_7573c3c1-4b9d-4175-beef-8a4d0c604b6a/manager/0.log" Jan 29 09:26:08 crc kubenswrapper[4895]: I0129 09:26:08.518097 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-f5cn6_13c23359-7d69-4f3c-b89a-a25bee602474/control-plane-machine-set-operator/0.log" Jan 29 09:26:08 crc kubenswrapper[4895]: I0129 09:26:08.745462 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xc6q5_5203d54b-a735-4118-bae0-7554299a98cf/kube-rbac-proxy/0.log" Jan 29 09:26:08 crc kubenswrapper[4895]: I0129 09:26:08.794629 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xc6q5_5203d54b-a735-4118-bae0-7554299a98cf/machine-api-operator/0.log" Jan 29 09:26:22 crc kubenswrapper[4895]: I0129 09:26:22.603098 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-lmpk5_6b07c0c4-eb39-4313-b842-9a36bd400bae/cert-manager-controller/0.log" Jan 29 09:26:22 crc kubenswrapper[4895]: I0129 09:26:22.824312 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-rwbcg_754acefa-2366-4c3a-97be-e4a941d8066b/cert-manager-cainjector/0.log" Jan 29 09:26:22 crc kubenswrapper[4895]: I0129 09:26:22.936087 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-ttlgx_0e41817c-460a-4a92-9220-10fde5db690b/cert-manager-webhook/0.log" Jan 29 09:26:36 crc kubenswrapper[4895]: I0129 09:26:36.586318 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-lt6cs_e5b25585-8953-42bb-a128-13272bda1f87/nmstate-console-plugin/0.log" Jan 29 09:26:36 crc kubenswrapper[4895]: I0129 09:26:36.812274 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-62g2t_2a149626-5a36-418c-b7a2-87ff50e92c34/nmstate-handler/0.log" Jan 29 09:26:36 crc kubenswrapper[4895]: I0129 09:26:36.909109 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-qg2h4_2ce6529a-8832-46df-b211-7d7f2388214b/kube-rbac-proxy/0.log" Jan 29 09:26:37 crc kubenswrapper[4895]: I0129 09:26:37.051504 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-qg2h4_2ce6529a-8832-46df-b211-7d7f2388214b/nmstate-metrics/0.log" Jan 29 09:26:37 crc kubenswrapper[4895]: I0129 09:26:37.085880 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-nbgdz_686e1923-3a25-460b-b2f1-636cd6039ffe/nmstate-operator/0.log" Jan 29 09:26:37 crc kubenswrapper[4895]: I0129 09:26:37.275347 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-mgwfl_df364a5d-82b0-43f6-9e56-fb2fd0fef1e2/nmstate-webhook/0.log" Jan 29 09:26:46 crc kubenswrapper[4895]: I0129 09:26:46.021089 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:26:46 crc kubenswrapper[4895]: I0129 09:26:46.021989 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:27:08 crc kubenswrapper[4895]: I0129 09:27:08.076050 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-68xht_3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f/kube-rbac-proxy/0.log" Jan 29 09:27:08 crc kubenswrapper[4895]: I0129 09:27:08.150687 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-68xht_3ea4f51f-f0bd-408c-8fbe-d38e86e52f2f/controller/0.log" Jan 29 09:27:08 crc kubenswrapper[4895]: I0129 09:27:08.372597 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-frr-files/0.log" Jan 29 09:27:08 crc kubenswrapper[4895]: I0129 09:27:08.584095 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-reloader/0.log" Jan 29 09:27:08 crc kubenswrapper[4895]: I0129 09:27:08.629288 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-reloader/0.log" Jan 29 09:27:08 crc kubenswrapper[4895]: I0129 09:27:08.629692 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-frr-files/0.log" Jan 29 09:27:08 crc kubenswrapper[4895]: I0129 09:27:08.643519 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-metrics/0.log" Jan 29 09:27:08 crc kubenswrapper[4895]: I0129 09:27:08.860761 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-frr-files/0.log" Jan 29 09:27:08 crc kubenswrapper[4895]: I0129 09:27:08.881050 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-metrics/0.log" Jan 29 09:27:08 crc kubenswrapper[4895]: I0129 09:27:08.903872 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-reloader/0.log" Jan 29 09:27:08 crc kubenswrapper[4895]: I0129 09:27:08.937188 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-metrics/0.log" Jan 29 09:27:09 crc kubenswrapper[4895]: I0129 09:27:09.116850 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-metrics/0.log" Jan 29 09:27:09 crc kubenswrapper[4895]: I0129 09:27:09.124259 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-reloader/0.log" Jan 29 09:27:09 crc kubenswrapper[4895]: I0129 09:27:09.127451 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/cp-frr-files/0.log" Jan 29 09:27:09 crc kubenswrapper[4895]: I0129 09:27:09.171607 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/controller/0.log" Jan 29 09:27:09 crc kubenswrapper[4895]: I0129 09:27:09.336874 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/kube-rbac-proxy/0.log" Jan 29 09:27:09 crc kubenswrapper[4895]: I0129 09:27:09.388540 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/frr-metrics/0.log" Jan 29 09:27:09 crc kubenswrapper[4895]: I0129 09:27:09.409173 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/kube-rbac-proxy-frr/0.log" Jan 29 09:27:09 crc kubenswrapper[4895]: I0129 09:27:09.610620 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/reloader/0.log" Jan 29 09:27:09 crc kubenswrapper[4895]: I0129 09:27:09.718177 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-jm6jg_5d4d4832-512a-4d5c-b6ea-8a90b2ad3297/frr-k8s-webhook-server/0.log" Jan 29 09:27:10 crc kubenswrapper[4895]: I0129 09:27:10.013730 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6695fc676d-4fxsl_d82c9dec-3917-4cb6-91f0-ee9b6ab253e7/manager/0.log" Jan 29 09:27:10 crc kubenswrapper[4895]: I0129 09:27:10.194220 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-659bffd789-lt6hz_5af74c68-7b32-4db6-97b7-35cdcd2e9504/webhook-server/0.log" Jan 29 09:27:10 crc kubenswrapper[4895]: I0129 09:27:10.339865 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vpgqh_622e6489-4886-4658-b155-3c0d9cf63fbb/kube-rbac-proxy/0.log" Jan 29 09:27:10 crc kubenswrapper[4895]: I0129 09:27:10.518180 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fhh6k_62a11870-51ea-475c-82c9-e8db645c1284/frr/0.log" Jan 29 09:27:10 crc kubenswrapper[4895]: I0129 09:27:10.837441 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vpgqh_622e6489-4886-4658-b155-3c0d9cf63fbb/speaker/0.log" Jan 29 09:27:16 crc kubenswrapper[4895]: I0129 09:27:16.020882 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:27:16 crc kubenswrapper[4895]: I0129 09:27:16.021654 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 29 09:27:24 crc kubenswrapper[4895]: I0129 09:27:24.885230 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh_0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709/util/0.log" Jan 29 09:27:25 crc kubenswrapper[4895]: I0129 09:27:25.136473 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh_0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709/util/0.log" Jan 29 09:27:25 crc kubenswrapper[4895]: I0129 09:27:25.137626 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh_0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709/pull/0.log" Jan 29 09:27:25 crc kubenswrapper[4895]: I0129 09:27:25.197901 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh_0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709/pull/0.log" Jan 29 09:27:25 crc kubenswrapper[4895]: I0129 09:27:25.411376 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh_0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709/extract/0.log" Jan 29 09:27:25 crc kubenswrapper[4895]: I0129 09:27:25.420272 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh_0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709/pull/0.log" Jan 29 09:27:25 crc kubenswrapper[4895]: I0129 09:27:25.435988 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4cjbh_0e2d8a1c-4fc2-4f7e-b145-41ea07ba5709/util/0.log" Jan 29 09:27:25 crc kubenswrapper[4895]: I0129 09:27:25.647875 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt_31f401a0-5ab9-427e-a086-8099fe28462f/util/0.log" Jan 29 09:27:25 crc kubenswrapper[4895]: I0129 09:27:25.827994 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt_31f401a0-5ab9-427e-a086-8099fe28462f/util/0.log" Jan 29 09:27:25 crc kubenswrapper[4895]: I0129 09:27:25.848409 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt_31f401a0-5ab9-427e-a086-8099fe28462f/pull/0.log" Jan 29 09:27:25 crc kubenswrapper[4895]: I0129 09:27:25.945856 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt_31f401a0-5ab9-427e-a086-8099fe28462f/pull/0.log" Jan 29 09:27:26 crc kubenswrapper[4895]: I0129 09:27:26.149997 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt_31f401a0-5ab9-427e-a086-8099fe28462f/pull/0.log" Jan 29 09:27:26 crc kubenswrapper[4895]: I0129 09:27:26.155300 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt_31f401a0-5ab9-427e-a086-8099fe28462f/util/0.log" Jan 29 09:27:26 crc kubenswrapper[4895]: I0129 09:27:26.155936 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713wb4dt_31f401a0-5ab9-427e-a086-8099fe28462f/extract/0.log" Jan 29 09:27:26 crc kubenswrapper[4895]: I0129 09:27:26.368181 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsrdx_8276d7a1-1274-4d85-9243-ae6b7984ef52/extract-utilities/0.log" Jan 29 09:27:26 crc 
kubenswrapper[4895]: I0129 09:27:26.552439 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsrdx_8276d7a1-1274-4d85-9243-ae6b7984ef52/extract-utilities/0.log" Jan 29 09:27:26 crc kubenswrapper[4895]: I0129 09:27:26.623470 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsrdx_8276d7a1-1274-4d85-9243-ae6b7984ef52/extract-content/0.log" Jan 29 09:27:26 crc kubenswrapper[4895]: I0129 09:27:26.631091 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsrdx_8276d7a1-1274-4d85-9243-ae6b7984ef52/extract-content/0.log" Jan 29 09:27:26 crc kubenswrapper[4895]: I0129 09:27:26.824144 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsrdx_8276d7a1-1274-4d85-9243-ae6b7984ef52/extract-utilities/0.log" Jan 29 09:27:26 crc kubenswrapper[4895]: I0129 09:27:26.825122 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsrdx_8276d7a1-1274-4d85-9243-ae6b7984ef52/extract-content/0.log" Jan 29 09:27:27 crc kubenswrapper[4895]: I0129 09:27:27.162112 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4jxzr_399e86e5-8d5a-4663-8ce4-a919dd6f6333/extract-utilities/0.log" Jan 29 09:27:27 crc kubenswrapper[4895]: I0129 09:27:27.339195 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsrdx_8276d7a1-1274-4d85-9243-ae6b7984ef52/registry-server/0.log" Jan 29 09:27:27 crc kubenswrapper[4895]: I0129 09:27:27.376245 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4jxzr_399e86e5-8d5a-4663-8ce4-a919dd6f6333/extract-utilities/0.log" Jan 29 09:27:27 crc kubenswrapper[4895]: I0129 09:27:27.383400 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-4jxzr_399e86e5-8d5a-4663-8ce4-a919dd6f6333/extract-content/0.log" Jan 29 09:27:27 crc kubenswrapper[4895]: I0129 09:27:27.385051 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4jxzr_399e86e5-8d5a-4663-8ce4-a919dd6f6333/extract-content/0.log" Jan 29 09:27:27 crc kubenswrapper[4895]: I0129 09:27:27.648234 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4jxzr_399e86e5-8d5a-4663-8ce4-a919dd6f6333/extract-content/0.log" Jan 29 09:27:27 crc kubenswrapper[4895]: I0129 09:27:27.651072 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4jxzr_399e86e5-8d5a-4663-8ce4-a919dd6f6333/extract-utilities/0.log" Jan 29 09:27:27 crc kubenswrapper[4895]: I0129 09:27:27.946339 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4jxzr_399e86e5-8d5a-4663-8ce4-a919dd6f6333/registry-server/0.log" Jan 29 09:27:28 crc kubenswrapper[4895]: I0129 09:27:28.143909 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cdbdn_763fcf96-02dd-48dd-a5b0-40714be2a672/marketplace-operator/0.log" Jan 29 09:27:28 crc kubenswrapper[4895]: I0129 09:27:28.280191 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btnwv_7274a4b5-8d6d-4743-b3ca-b1c3be13abbb/extract-utilities/0.log" Jan 29 09:27:28 crc kubenswrapper[4895]: I0129 09:27:28.451394 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btnwv_7274a4b5-8d6d-4743-b3ca-b1c3be13abbb/extract-utilities/0.log" Jan 29 09:27:28 crc kubenswrapper[4895]: I0129 09:27:28.507631 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-btnwv_7274a4b5-8d6d-4743-b3ca-b1c3be13abbb/extract-content/0.log" Jan 29 09:27:28 crc kubenswrapper[4895]: I0129 09:27:28.525891 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btnwv_7274a4b5-8d6d-4743-b3ca-b1c3be13abbb/extract-content/0.log" Jan 29 09:27:28 crc kubenswrapper[4895]: I0129 09:27:28.791640 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btnwv_7274a4b5-8d6d-4743-b3ca-b1c3be13abbb/extract-content/0.log" Jan 29 09:27:28 crc kubenswrapper[4895]: I0129 09:27:28.826464 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btnwv_7274a4b5-8d6d-4743-b3ca-b1c3be13abbb/extract-utilities/0.log" Jan 29 09:27:28 crc kubenswrapper[4895]: I0129 09:27:28.926103 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-btnwv_7274a4b5-8d6d-4743-b3ca-b1c3be13abbb/registry-server/0.log" Jan 29 09:27:29 crc kubenswrapper[4895]: I0129 09:27:29.073477 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrslr_0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55/extract-utilities/0.log" Jan 29 09:27:29 crc kubenswrapper[4895]: I0129 09:27:29.276435 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrslr_0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55/extract-content/0.log" Jan 29 09:27:29 crc kubenswrapper[4895]: I0129 09:27:29.320613 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrslr_0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55/extract-utilities/0.log" Jan 29 09:27:29 crc kubenswrapper[4895]: I0129 09:27:29.339447 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrslr_0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55/extract-content/0.log" 
Jan 29 09:27:29 crc kubenswrapper[4895]: I0129 09:27:29.533261 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrslr_0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55/extract-content/0.log" Jan 29 09:27:29 crc kubenswrapper[4895]: I0129 09:27:29.535517 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrslr_0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55/extract-utilities/0.log" Jan 29 09:27:29 crc kubenswrapper[4895]: I0129 09:27:29.969740 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrslr_0c6ab8b9-4fbc-40f6-9a78-bfc18d82ba55/registry-server/0.log" Jan 29 09:27:46 crc kubenswrapper[4895]: I0129 09:27:46.021267 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:27:46 crc kubenswrapper[4895]: I0129 09:27:46.022141 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:27:46 crc kubenswrapper[4895]: I0129 09:27:46.022225 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" Jan 29 09:27:46 crc kubenswrapper[4895]: I0129 09:27:46.023084 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"43d7dd8fbfd238a054c8f2b2f0b751543be50bb5b4e90095835c26e89f22f3db"} pod="openshift-machine-config-operator/machine-config-daemon-z82hk" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:27:46 crc kubenswrapper[4895]: I0129 09:27:46.023175 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" containerID="cri-o://43d7dd8fbfd238a054c8f2b2f0b751543be50bb5b4e90095835c26e89f22f3db" gracePeriod=600 Jan 29 09:27:46 crc kubenswrapper[4895]: E0129 09:27:46.688641 4895 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.142:51868->38.129.56.142:46589: read tcp 38.129.56.142:51868->38.129.56.142:46589: read: connection reset by peer Jan 29 09:27:47 crc kubenswrapper[4895]: I0129 09:27:47.061432 4895 generic.go:334] "Generic (PLEG): container finished" podID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerID="43d7dd8fbfd238a054c8f2b2f0b751543be50bb5b4e90095835c26e89f22f3db" exitCode=0 Jan 29 09:27:47 crc kubenswrapper[4895]: I0129 09:27:47.061630 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerDied","Data":"43d7dd8fbfd238a054c8f2b2f0b751543be50bb5b4e90095835c26e89f22f3db"} Jan 29 09:27:47 crc kubenswrapper[4895]: I0129 09:27:47.061939 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerStarted","Data":"104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"} Jan 29 09:27:47 crc kubenswrapper[4895]: I0129 09:27:47.061960 4895 scope.go:117] "RemoveContainer" containerID="79afccdc1e8d5197e8d3f95fee6f6b405fdadd258a1063bd56812bbb4492dbbc" Jan 29 09:29:27 crc kubenswrapper[4895]: I0129 09:29:27.471692 4895 generic.go:334] "Generic (PLEG): container finished" 
podID="f1dbd138-f7a8-41a1-9720-8d89c6276e2d" containerID="ed304213532749f097a3aff8cd36277133e7b7cc8afd3061fbdea936acdaa0a6" exitCode=0 Jan 29 09:29:27 crc kubenswrapper[4895]: I0129 09:29:27.471799 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwbt4/must-gather-msszk" event={"ID":"f1dbd138-f7a8-41a1-9720-8d89c6276e2d","Type":"ContainerDied","Data":"ed304213532749f097a3aff8cd36277133e7b7cc8afd3061fbdea936acdaa0a6"} Jan 29 09:29:27 crc kubenswrapper[4895]: I0129 09:29:27.473397 4895 scope.go:117] "RemoveContainer" containerID="ed304213532749f097a3aff8cd36277133e7b7cc8afd3061fbdea936acdaa0a6" Jan 29 09:29:27 crc kubenswrapper[4895]: I0129 09:29:27.594480 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vwbt4_must-gather-msszk_f1dbd138-f7a8-41a1-9720-8d89c6276e2d/gather/0.log" Jan 29 09:29:38 crc kubenswrapper[4895]: I0129 09:29:38.268233 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vwbt4/must-gather-msszk"] Jan 29 09:29:38 crc kubenswrapper[4895]: I0129 09:29:38.269603 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vwbt4/must-gather-msszk" podUID="f1dbd138-f7a8-41a1-9720-8d89c6276e2d" containerName="copy" containerID="cri-o://cedad95642f0e851f8987a53eaa54ffdc53a7d39e5524d6e65db0bccc7e65db7" gracePeriod=2 Jan 29 09:29:38 crc kubenswrapper[4895]: I0129 09:29:38.277038 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vwbt4/must-gather-msszk"] Jan 29 09:29:38 crc kubenswrapper[4895]: I0129 09:29:38.644471 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vwbt4_must-gather-msszk_f1dbd138-f7a8-41a1-9720-8d89c6276e2d/copy/0.log" Jan 29 09:29:38 crc kubenswrapper[4895]: I0129 09:29:38.649026 4895 generic.go:334] "Generic (PLEG): container finished" podID="f1dbd138-f7a8-41a1-9720-8d89c6276e2d" 
containerID="cedad95642f0e851f8987a53eaa54ffdc53a7d39e5524d6e65db0bccc7e65db7" exitCode=143 Jan 29 09:29:38 crc kubenswrapper[4895]: I0129 09:29:38.741544 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vwbt4_must-gather-msszk_f1dbd138-f7a8-41a1-9720-8d89c6276e2d/copy/0.log" Jan 29 09:29:38 crc kubenswrapper[4895]: I0129 09:29:38.741979 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwbt4/must-gather-msszk" Jan 29 09:29:38 crc kubenswrapper[4895]: I0129 09:29:38.929295 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lccmm\" (UniqueName: \"kubernetes.io/projected/f1dbd138-f7a8-41a1-9720-8d89c6276e2d-kube-api-access-lccmm\") pod \"f1dbd138-f7a8-41a1-9720-8d89c6276e2d\" (UID: \"f1dbd138-f7a8-41a1-9720-8d89c6276e2d\") " Jan 29 09:29:38 crc kubenswrapper[4895]: I0129 09:29:38.929392 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f1dbd138-f7a8-41a1-9720-8d89c6276e2d-must-gather-output\") pod \"f1dbd138-f7a8-41a1-9720-8d89c6276e2d\" (UID: \"f1dbd138-f7a8-41a1-9720-8d89c6276e2d\") " Jan 29 09:29:38 crc kubenswrapper[4895]: I0129 09:29:38.936595 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1dbd138-f7a8-41a1-9720-8d89c6276e2d-kube-api-access-lccmm" (OuterVolumeSpecName: "kube-api-access-lccmm") pod "f1dbd138-f7a8-41a1-9720-8d89c6276e2d" (UID: "f1dbd138-f7a8-41a1-9720-8d89c6276e2d"). InnerVolumeSpecName "kube-api-access-lccmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:29:39 crc kubenswrapper[4895]: I0129 09:29:39.033032 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lccmm\" (UniqueName: \"kubernetes.io/projected/f1dbd138-f7a8-41a1-9720-8d89c6276e2d-kube-api-access-lccmm\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:39 crc kubenswrapper[4895]: I0129 09:29:39.090023 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1dbd138-f7a8-41a1-9720-8d89c6276e2d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f1dbd138-f7a8-41a1-9720-8d89c6276e2d" (UID: "f1dbd138-f7a8-41a1-9720-8d89c6276e2d"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:29:39 crc kubenswrapper[4895]: I0129 09:29:39.135721 4895 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f1dbd138-f7a8-41a1-9720-8d89c6276e2d-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:39 crc kubenswrapper[4895]: I0129 09:29:39.375041 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1dbd138-f7a8-41a1-9720-8d89c6276e2d" path="/var/lib/kubelet/pods/f1dbd138-f7a8-41a1-9720-8d89c6276e2d/volumes" Jan 29 09:29:39 crc kubenswrapper[4895]: I0129 09:29:39.657705 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vwbt4_must-gather-msszk_f1dbd138-f7a8-41a1-9720-8d89c6276e2d/copy/0.log" Jan 29 09:29:39 crc kubenswrapper[4895]: I0129 09:29:39.658201 4895 scope.go:117] "RemoveContainer" containerID="cedad95642f0e851f8987a53eaa54ffdc53a7d39e5524d6e65db0bccc7e65db7" Jan 29 09:29:39 crc kubenswrapper[4895]: I0129 09:29:39.658263 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vwbt4/must-gather-msszk" Jan 29 09:29:39 crc kubenswrapper[4895]: I0129 09:29:39.681272 4895 scope.go:117] "RemoveContainer" containerID="ed304213532749f097a3aff8cd36277133e7b7cc8afd3061fbdea936acdaa0a6" Jan 29 09:29:46 crc kubenswrapper[4895]: I0129 09:29:46.020552 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:29:46 crc kubenswrapper[4895]: I0129 09:29:46.021364 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.166859 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht"] Jan 29 09:30:00 crc kubenswrapper[4895]: E0129 09:30:00.168801 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca11af0c-41c1-4c97-9d03-bcd373a67f66" containerName="registry-server" Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.168825 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca11af0c-41c1-4c97-9d03-bcd373a67f66" containerName="registry-server" Jan 29 09:30:00 crc kubenswrapper[4895]: E0129 09:30:00.168849 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee8343f-d847-4168-8d96-0028e2702af9" containerName="extract-utilities" Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.168859 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee8343f-d847-4168-8d96-0028e2702af9" containerName="extract-utilities" Jan 29 
09:30:00 crc kubenswrapper[4895]: E0129 09:30:00.168884 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1dbd138-f7a8-41a1-9720-8d89c6276e2d" containerName="gather" Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.168897 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1dbd138-f7a8-41a1-9720-8d89c6276e2d" containerName="gather" Jan 29 09:30:00 crc kubenswrapper[4895]: E0129 09:30:00.168912 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee8343f-d847-4168-8d96-0028e2702af9" containerName="extract-content" Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.168940 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee8343f-d847-4168-8d96-0028e2702af9" containerName="extract-content" Jan 29 09:30:00 crc kubenswrapper[4895]: E0129 09:30:00.168972 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca11af0c-41c1-4c97-9d03-bcd373a67f66" containerName="extract-content" Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.168980 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca11af0c-41c1-4c97-9d03-bcd373a67f66" containerName="extract-content" Jan 29 09:30:00 crc kubenswrapper[4895]: E0129 09:30:00.168992 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca11af0c-41c1-4c97-9d03-bcd373a67f66" containerName="extract-utilities" Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.168999 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca11af0c-41c1-4c97-9d03-bcd373a67f66" containerName="extract-utilities" Jan 29 09:30:00 crc kubenswrapper[4895]: E0129 09:30:00.169014 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee8343f-d847-4168-8d96-0028e2702af9" containerName="registry-server" Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.169021 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee8343f-d847-4168-8d96-0028e2702af9" containerName="registry-server" Jan 29 09:30:00 crc 
kubenswrapper[4895]: E0129 09:30:00.169054 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1dbd138-f7a8-41a1-9720-8d89c6276e2d" containerName="copy" Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.169061 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1dbd138-f7a8-41a1-9720-8d89c6276e2d" containerName="copy" Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.169308 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee8343f-d847-4168-8d96-0028e2702af9" containerName="registry-server" Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.169329 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca11af0c-41c1-4c97-9d03-bcd373a67f66" containerName="registry-server" Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.169367 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1dbd138-f7a8-41a1-9720-8d89c6276e2d" containerName="gather" Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.169395 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1dbd138-f7a8-41a1-9720-8d89c6276e2d" containerName="copy" Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.171228 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht"
Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.177557 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.177857 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.203100 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht"]
Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.262190 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cfeca41-6d1f-4b39-b44b-257967a14f9d-secret-volume\") pod \"collect-profiles-29494650-8m7ht\" (UID: \"6cfeca41-6d1f-4b39-b44b-257967a14f9d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht"
Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.262283 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkht8\" (UniqueName: \"kubernetes.io/projected/6cfeca41-6d1f-4b39-b44b-257967a14f9d-kube-api-access-jkht8\") pod \"collect-profiles-29494650-8m7ht\" (UID: \"6cfeca41-6d1f-4b39-b44b-257967a14f9d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht"
Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.262428 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cfeca41-6d1f-4b39-b44b-257967a14f9d-config-volume\") pod \"collect-profiles-29494650-8m7ht\" (UID: \"6cfeca41-6d1f-4b39-b44b-257967a14f9d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht"
Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.364832 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cfeca41-6d1f-4b39-b44b-257967a14f9d-config-volume\") pod \"collect-profiles-29494650-8m7ht\" (UID: \"6cfeca41-6d1f-4b39-b44b-257967a14f9d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht"
Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.365065 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cfeca41-6d1f-4b39-b44b-257967a14f9d-secret-volume\") pod \"collect-profiles-29494650-8m7ht\" (UID: \"6cfeca41-6d1f-4b39-b44b-257967a14f9d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht"
Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.365152 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkht8\" (UniqueName: \"kubernetes.io/projected/6cfeca41-6d1f-4b39-b44b-257967a14f9d-kube-api-access-jkht8\") pod \"collect-profiles-29494650-8m7ht\" (UID: \"6cfeca41-6d1f-4b39-b44b-257967a14f9d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht"
Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.366236 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cfeca41-6d1f-4b39-b44b-257967a14f9d-config-volume\") pod \"collect-profiles-29494650-8m7ht\" (UID: \"6cfeca41-6d1f-4b39-b44b-257967a14f9d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht"
Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.374816 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cfeca41-6d1f-4b39-b44b-257967a14f9d-secret-volume\") pod \"collect-profiles-29494650-8m7ht\" (UID: \"6cfeca41-6d1f-4b39-b44b-257967a14f9d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht"
Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.395494 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkht8\" (UniqueName: \"kubernetes.io/projected/6cfeca41-6d1f-4b39-b44b-257967a14f9d-kube-api-access-jkht8\") pod \"collect-profiles-29494650-8m7ht\" (UID: \"6cfeca41-6d1f-4b39-b44b-257967a14f9d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht"
Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.507732 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht"
Jan 29 09:30:00 crc kubenswrapper[4895]: I0129 09:30:00.857093 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht"]
Jan 29 09:30:01 crc kubenswrapper[4895]: I0129 09:30:01.872137 4895 generic.go:334] "Generic (PLEG): container finished" podID="6cfeca41-6d1f-4b39-b44b-257967a14f9d" containerID="75bd7cde2bc218c83fa91a9cf9fd4f8b946d1101d3c11be3fa669a27e5bb2b84" exitCode=0
Jan 29 09:30:01 crc kubenswrapper[4895]: I0129 09:30:01.872236 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht" event={"ID":"6cfeca41-6d1f-4b39-b44b-257967a14f9d","Type":"ContainerDied","Data":"75bd7cde2bc218c83fa91a9cf9fd4f8b946d1101d3c11be3fa669a27e5bb2b84"}
Jan 29 09:30:01 crc kubenswrapper[4895]: I0129 09:30:01.872554 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht" event={"ID":"6cfeca41-6d1f-4b39-b44b-257967a14f9d","Type":"ContainerStarted","Data":"1a49519535eb4a4cad0b0c58114ff025a98828cca9722596c3de3781f3e7257e"}
Jan 29 09:30:03 crc kubenswrapper[4895]: I0129 09:30:03.249288 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht"
Jan 29 09:30:03 crc kubenswrapper[4895]: I0129 09:30:03.364371 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkht8\" (UniqueName: \"kubernetes.io/projected/6cfeca41-6d1f-4b39-b44b-257967a14f9d-kube-api-access-jkht8\") pod \"6cfeca41-6d1f-4b39-b44b-257967a14f9d\" (UID: \"6cfeca41-6d1f-4b39-b44b-257967a14f9d\") "
Jan 29 09:30:03 crc kubenswrapper[4895]: I0129 09:30:03.364557 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cfeca41-6d1f-4b39-b44b-257967a14f9d-config-volume\") pod \"6cfeca41-6d1f-4b39-b44b-257967a14f9d\" (UID: \"6cfeca41-6d1f-4b39-b44b-257967a14f9d\") "
Jan 29 09:30:03 crc kubenswrapper[4895]: I0129 09:30:03.364714 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cfeca41-6d1f-4b39-b44b-257967a14f9d-secret-volume\") pod \"6cfeca41-6d1f-4b39-b44b-257967a14f9d\" (UID: \"6cfeca41-6d1f-4b39-b44b-257967a14f9d\") "
Jan 29 09:30:03 crc kubenswrapper[4895]: I0129 09:30:03.365489 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cfeca41-6d1f-4b39-b44b-257967a14f9d-config-volume" (OuterVolumeSpecName: "config-volume") pod "6cfeca41-6d1f-4b39-b44b-257967a14f9d" (UID: "6cfeca41-6d1f-4b39-b44b-257967a14f9d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:30:03 crc kubenswrapper[4895]: I0129 09:30:03.378250 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cfeca41-6d1f-4b39-b44b-257967a14f9d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6cfeca41-6d1f-4b39-b44b-257967a14f9d" (UID: "6cfeca41-6d1f-4b39-b44b-257967a14f9d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:30:03 crc kubenswrapper[4895]: I0129 09:30:03.378442 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cfeca41-6d1f-4b39-b44b-257967a14f9d-kube-api-access-jkht8" (OuterVolumeSpecName: "kube-api-access-jkht8") pod "6cfeca41-6d1f-4b39-b44b-257967a14f9d" (UID: "6cfeca41-6d1f-4b39-b44b-257967a14f9d"). InnerVolumeSpecName "kube-api-access-jkht8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:30:03 crc kubenswrapper[4895]: I0129 09:30:03.467970 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cfeca41-6d1f-4b39-b44b-257967a14f9d-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 09:30:03 crc kubenswrapper[4895]: I0129 09:30:03.468019 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkht8\" (UniqueName: \"kubernetes.io/projected/6cfeca41-6d1f-4b39-b44b-257967a14f9d-kube-api-access-jkht8\") on node \"crc\" DevicePath \"\""
Jan 29 09:30:03 crc kubenswrapper[4895]: I0129 09:30:03.468029 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cfeca41-6d1f-4b39-b44b-257967a14f9d-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 09:30:03 crc kubenswrapper[4895]: I0129 09:30:03.896405 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht" event={"ID":"6cfeca41-6d1f-4b39-b44b-257967a14f9d","Type":"ContainerDied","Data":"1a49519535eb4a4cad0b0c58114ff025a98828cca9722596c3de3781f3e7257e"}
Jan 29 09:30:03 crc kubenswrapper[4895]: I0129 09:30:03.896485 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a49519535eb4a4cad0b0c58114ff025a98828cca9722596c3de3781f3e7257e"
Jan 29 09:30:03 crc kubenswrapper[4895]: I0129 09:30:03.896608 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-8m7ht"
Jan 29 09:30:04 crc kubenswrapper[4895]: I0129 09:30:04.335592 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp"]
Jan 29 09:30:04 crc kubenswrapper[4895]: I0129 09:30:04.342842 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-qhdtp"]
Jan 29 09:30:05 crc kubenswrapper[4895]: I0129 09:30:05.229840 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf95bf0-4949-44c6-9387-7fa2d4cf2b56" path="/var/lib/kubelet/pods/2cf95bf0-4949-44c6-9387-7fa2d4cf2b56/volumes"
Jan 29 09:30:07 crc kubenswrapper[4895]: I0129 09:30:07.234691 4895 scope.go:117] "RemoveContainer" containerID="9803624622e3994dec96327f22fb99716cccf859b6cac0b1de5e84ff5e3b9a16"
Jan 29 09:30:16 crc kubenswrapper[4895]: I0129 09:30:16.020672 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 09:30:16 crc kubenswrapper[4895]: I0129 09:30:16.021480 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 09:30:46 crc kubenswrapper[4895]: I0129 09:30:46.020935 4895 patch_prober.go:28] interesting pod/machine-config-daemon-z82hk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 09:30:46 crc kubenswrapper[4895]: I0129 09:30:46.021660 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 09:30:46 crc kubenswrapper[4895]: I0129 09:30:46.021731 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z82hk"
Jan 29 09:30:46 crc kubenswrapper[4895]: I0129 09:30:46.022733 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"} pod="openshift-machine-config-operator/machine-config-daemon-z82hk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 09:30:46 crc kubenswrapper[4895]: I0129 09:30:46.022806 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerName="machine-config-daemon" containerID="cri-o://104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13" gracePeriod=600
Jan 29 09:30:46 crc kubenswrapper[4895]: E0129 09:30:46.147756 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b"
Jan 29 09:30:46 crc kubenswrapper[4895]: I0129 09:30:46.321664 4895 generic.go:334] "Generic (PLEG): container finished" podID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b" containerID="104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13" exitCode=0
Jan 29 09:30:46 crc kubenswrapper[4895]: I0129 09:30:46.321715 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" event={"ID":"a4a4bd95-f02a-4617-9aa4-febfa6bee92b","Type":"ContainerDied","Data":"104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"}
Jan 29 09:30:46 crc kubenswrapper[4895]: I0129 09:30:46.321757 4895 scope.go:117] "RemoveContainer" containerID="43d7dd8fbfd238a054c8f2b2f0b751543be50bb5b4e90095835c26e89f22f3db"
Jan 29 09:30:46 crc kubenswrapper[4895]: I0129 09:30:46.322716 4895 scope.go:117] "RemoveContainer" containerID="104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"
Jan 29 09:30:46 crc kubenswrapper[4895]: E0129 09:30:46.323070 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b"
Jan 29 09:31:00 crc kubenswrapper[4895]: I0129 09:31:00.212368 4895 scope.go:117] "RemoveContainer" containerID="104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"
Jan 29 09:31:00 crc kubenswrapper[4895]: E0129 09:31:00.213439 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b"
Jan 29 09:31:13 crc kubenswrapper[4895]: I0129 09:31:13.212594 4895 scope.go:117] "RemoveContainer" containerID="104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"
Jan 29 09:31:13 crc kubenswrapper[4895]: E0129 09:31:13.213846 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b"
Jan 29 09:31:24 crc kubenswrapper[4895]: I0129 09:31:24.211661 4895 scope.go:117] "RemoveContainer" containerID="104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"
Jan 29 09:31:24 crc kubenswrapper[4895]: E0129 09:31:24.212827 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b"
Jan 29 09:31:38 crc kubenswrapper[4895]: I0129 09:31:38.213250 4895 scope.go:117] "RemoveContainer" containerID="104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"
Jan 29 09:31:38 crc kubenswrapper[4895]: E0129 09:31:38.214987 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b"
Jan 29 09:31:51 crc kubenswrapper[4895]: I0129 09:31:51.212453 4895 scope.go:117] "RemoveContainer" containerID="104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"
Jan 29 09:31:51 crc kubenswrapper[4895]: E0129 09:31:51.213575 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b"
Jan 29 09:32:04 crc kubenswrapper[4895]: I0129 09:32:04.212058 4895 scope.go:117] "RemoveContainer" containerID="104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"
Jan 29 09:32:04 crc kubenswrapper[4895]: E0129 09:32:04.213094 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b"
Jan 29 09:32:18 crc kubenswrapper[4895]: I0129 09:32:18.213732 4895 scope.go:117] "RemoveContainer" containerID="104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"
Jan 29 09:32:18 crc kubenswrapper[4895]: E0129 09:32:18.215074 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b"
Jan 29 09:32:31 crc kubenswrapper[4895]: I0129 09:32:31.217824 4895 scope.go:117] "RemoveContainer" containerID="104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"
Jan 29 09:32:31 crc kubenswrapper[4895]: E0129 09:32:31.220632 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b"
Jan 29 09:32:45 crc kubenswrapper[4895]: I0129 09:32:45.211682 4895 scope.go:117] "RemoveContainer" containerID="104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"
Jan 29 09:32:45 crc kubenswrapper[4895]: E0129 09:32:45.212709 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b"
Jan 29 09:32:56 crc kubenswrapper[4895]: I0129 09:32:56.212539 4895 scope.go:117] "RemoveContainer" containerID="104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"
Jan 29 09:32:56 crc kubenswrapper[4895]: E0129 09:32:56.213461 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b"
Jan 29 09:33:07 crc kubenswrapper[4895]: I0129 09:33:07.211600 4895 scope.go:117] "RemoveContainer" containerID="104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"
Jan 29 09:33:07 crc kubenswrapper[4895]: E0129 09:33:07.212757 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b"
Jan 29 09:33:18 crc kubenswrapper[4895]: I0129 09:33:18.212069 4895 scope.go:117] "RemoveContainer" containerID="104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"
Jan 29 09:33:18 crc kubenswrapper[4895]: E0129 09:33:18.213691 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b"
Jan 29 09:33:33 crc kubenswrapper[4895]: I0129 09:33:33.212778 4895 scope.go:117] "RemoveContainer" containerID="104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"
Jan 29 09:33:33 crc kubenswrapper[4895]: E0129 09:33:33.214248 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b"
Jan 29 09:33:47 crc kubenswrapper[4895]: I0129 09:33:47.219229 4895 scope.go:117] "RemoveContainer" containerID="104311eb4d3d23181b49989a6934509d196f994034560396f675a1987abdfb13"
Jan 29 09:33:47 crc kubenswrapper[4895]: E0129 09:33:47.220177 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z82hk_openshift-machine-config-operator(a4a4bd95-f02a-4617-9aa4-febfa6bee92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-z82hk" podUID="a4a4bd95-f02a-4617-9aa4-febfa6bee92b"